CN117750177A - Image display method, device, electronic equipment and medium - Google Patents


Info

Publication number
CN117750177A
Authority
CN
China
Prior art keywords
image
camera
application
input
interface
Prior art date
Legal status
Pending
Application number
CN202311740930.6A
Other languages
Chinese (zh)
Inventor
Zhu Xianwei (祝贤威)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311740930.6A
Publication of CN117750177A
Legal status: Pending

Landscapes

  • Telephone Function (AREA)

Abstract

The application discloses an image display method, an image display apparatus, an electronic device, and a medium, and belongs to the technical field of shooting. The method is applied to an electronic device including at least two cameras, and includes: controlling a first application to acquire a first image through a first camera of the at least two cameras, and controlling a second application to acquire a second image through a second camera of the at least two cameras; synthesizing the first image with related information of the second image to obtain a third image; and displaying the third image in a first interface, where the first interface is a picture display interface corresponding to the first camera in the first application. The related information of the second image includes at least one of: the second image, a picture element in the second image, or a recognition result of the second image.

Description

Image display method, device, electronic equipment and medium
Technical Field
The application belongs to the technical field of shooting, and particularly relates to an image display method, an image display device, electronic equipment and a medium.
Background
With the development of electronic devices, the number of cameras in an electronic device keeps increasing, and different shooting requirements can be met through different cameras. For example, a user may trigger the electronic device to control a communication application to invoke the front camera of the electronic device to perform a video call; or the user may trigger the electronic device to invoke the rear camera to collect a two-dimensional code image, and the two-dimensional code in that image is recognized through an image recognition application.
At present, while the electronic device controls the communication application to invoke the front camera for a video call, if the user wants to collect a two-dimensional code image through the rear camera and share it with the call partner, the user has to trigger the electronic device to control the communication application to switch to the rear camera, so that the rear camera collects the two-dimensional code image and the communication application transmits it to the call partner. In other words, an application can acquire images through different cameras only by switching the invoked camera, so the flexibility with which an application invokes cameras to acquire images is poor.
Disclosure of Invention
The embodiments of the application aim to provide an image display method, an image display apparatus, an electronic device, and a medium, which can solve the problem that an application has poor flexibility in invoking a camera to acquire images.
In a first aspect, an embodiment of the present application provides an image display method, applied to an electronic device including at least two cameras, the method including: controlling a first application to acquire a first image through a first camera of the at least two cameras, and controlling a second application to acquire a second image through a second camera of the at least two cameras; synthesizing the first image with related information of the second image to obtain a third image; and displaying the third image in a first interface, where the first interface is a picture display interface corresponding to the first camera in the first application. The related information of the second image includes at least one of: the second image, a picture element in the second image, or a recognition result of the second image.
In a second aspect, an embodiment of the present application provides an image display apparatus including at least two cameras, the apparatus further including: a control module, configured to control a first application to acquire a first image through a first camera of the at least two cameras and control a second application to acquire a second image through a second camera of the at least two cameras; a synthesizing module, configured to synthesize the first image with related information of the second image to obtain a third image; and a display module, configured to display the third image synthesized by the synthesizing module in a first interface, where the first interface is a picture display interface corresponding to the first camera in the first application. The related information of the second image includes at least one of: the second image, a picture element in the second image, or a recognition result of the second image.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, the first application can be controlled to acquire the first image through the first camera of the at least two cameras in the electronic device, and the second application can be controlled to acquire the second image through the second camera of the at least two cameras; the first image is synthesized with the related information of the second image to obtain the third image, and the third image is displayed in the interface of the first application corresponding to the first camera. In this way, the first application can obtain the second image collected through the second camera without switching the invoked camera, and the synthesized image including the related information of the second image (that is, the third image) is displayed in the first interface, which can improve the flexibility with which applications acquire and display images through cameras.
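As a purely illustrative, non-normative sketch of the three-step flow described above (capture, synthesize, display), the following Python fragment models each step with plain functions. All names here (`capture`, `related_info`, `synthesize`, `display_in_interface`) are hypothetical stand-ins for platform camera and UI APIs, not part of the patent's disclosure.

```python
# Hypothetical model of the claimed three-step flow: two applications each
# capture through a different camera, the first image is synthesized with
# the second image's related information, and the composite is shown in
# the first application's picture display interface.

def capture(app, camera):
    # Stand-in for a real camera capture; returns a labeled "image".
    return {"app": app, "camera": camera, "content": f"frame from {camera}"}

def related_info(second_image, kind="image"):
    # Per the claim, the related information is at least one of: the second
    # image itself, a picture element in it, or a recognition result of it.
    if kind == "image":
        return second_image["content"]
    raise ValueError("other kinds are not modeled in this sketch")

def synthesize(first_image, info):
    # Compose the first image with the second image's related information.
    return {"base": first_image["content"], "overlay": info}

def display_in_interface(third_image, interface):
    return f"{interface}: {third_image['base']} + {third_image['overlay']}"

first = capture("first_app", "front_camera")     # step 101, first camera
second = capture("second_app", "rear_camera")    # step 101, second camera
third = synthesize(first, related_info(second))  # step 102
shown = display_in_interface(third, "first_interface")  # step 103
```

Note that neither application switches its camera: each session stays open, and only the data is merged before display.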
Drawings
Fig. 1 is a schematic flow chart of an image display method according to an embodiment of the present application;
fig. 2A is an example schematic diagram of image elements in an image display method provided in an embodiment of the present application;
fig. 2B is a schematic diagram of an example of transferring image data collected by different cameras to an application in the image display method according to the embodiment of the present application;
FIG. 3 is a first schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 4 is a second schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 5A is a third schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 5B is a fourth schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 6A is a fifth schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 6B is a sixth schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 7A is a seventh schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 7B is an eighth schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 7C is a ninth schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 7D is a tenth schematic interface diagram of an application of the image display method according to an embodiment of the present application;
FIG. 7E is an eleventh schematic interface diagram of an application of the image display method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an image display device provided in an embodiment of the present application;
fig. 9 is one of schematic structural diagrams of an electronic device according to an embodiment of the present application;
fig. 10 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application fall within the protection scope of the present application.
The terms "first," "second," and the like in the description of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in sequences other than those illustrated or described herein. The objects distinguished by "first," "second," and the like are generally of one type, and the number of objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the specification denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The term "at least one" in the description of the present application refers to any one of, any two of, or a combination of two or more of the objects it covers. For example, at least one of a, b, and c may represent: "a", "b", "c", "a and b", "a and c", "b and c", or "a, b and c", where each of a, b, and c may be singular or plural. Similarly, "at least two" means two or more, with a meaning similar to that of "at least one".
The image display method, apparatus, electronic device, and medium provided in the embodiments of the application are described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.
The execution body of the image display method provided in the embodiments of the present application may be an electronic device, including a mobile electronic device or a non-mobile electronic device, or may be a functional module or functional entity in the electronic device capable of implementing the image display method. This may be determined according to actual use requirements and is not limited in the embodiments of the present application. The image display method provided in the embodiments of the present application is described below by taking as an example an electronic device, including at least two cameras, that executes the method.
Fig. 1 shows a flowchart of an image display method provided in an embodiment of the present application, and as shown in fig. 1, the image display method provided in an embodiment of the present application may include the following steps 101 to 103.
Step 101, the electronic device controls a first application to acquire a first image through a first camera of at least two cameras, and controls a second application to acquire a second image through a second camera of the at least two cameras.
In some embodiments, the first camera and the second camera are different cameras in the electronic device. For example, the first camera is a front camera and the second camera is a rear camera.
In some embodiments, the second camera may include at least one camera; for example, the second camera is one camera in the electronic device, or the second camera is two cameras in the electronic device. This may be determined according to the camera invoking requirement of the second application.
In some embodiments, the first application may be a communication application, a camera application, or any image-related application.
In some embodiments, the first application and the second application may be the same or different.
It will be appreciated that the application needs to call the camera first and then acquire the image through the camera. It should be noted that, the term "call" in this embodiment of the present application means a successful call, that is, after an application calls a camera, the camera starts to collect image data, and transmits the image data to the application, so that the application stores, shares, synthesizes, cuts or identifies the image data.
In some embodiments, the order in which the first application and the second application call the corresponding cameras is not limited.
Step 102, the electronic device synthesizes the related information of the first image and the second image to obtain a third image.
Wherein the related information of the second image may include at least one of: the second image, a picture element in the second image, or a recognition result of the second image.
In some embodiments, the image elements in the second image may also be referred to as objects in the second image.
For example, as shown in fig. 2A, the image 20 includes 3 image elements, which are a house 21, a tree 22, and an elderly person 23, respectively.
In some embodiments, the electronic device may directly combine the image elements in the second image with the first image, or the electronic device may combine the processed image elements in the second image with the first image.
For example, the electronic device may control the second application to perform a beautifying process on the portrait in the second image, and then synthesize the beautified portrait with the first image to obtain the third image.
In some embodiments, the recognition result of the second image may include, but is not limited to: the name, address, longitude and latitude, corresponding link, corresponding result page, etc. of the image element in the second image.
For example, if the second image is a map of China, the recognition result of the second image may be the word "China".
For example, if the second image includes an apple tree, the recognition result of the second image may be the word "apple".
For example, assuming that the second image includes a square, the recognition result of the second image may be the name, address, longitude and latitude of the square, and so on.
In some embodiments, assuming that the related information of the second image is the second image, the electronic device may control the first application to synthesize the image data of the second image with the image data of the first image to obtain the third image.
For example, as shown in fig. 2B, assuming that application A invokes camera 1 and application B invokes camera 2, the electronic device may transmit both the image data collected by camera 1 and the image data collected by camera 2 to application A, so that application A synthesizes the two sets of image data to obtain a synthesized image, that is, the third image.
In some embodiments, synthesizing the first image with the related information of the second image may include, but is not limited to, any of the following:
superimposing the related information of the second image on an image area of the first image to obtain the third image;
splicing the first image and the related information of the second image to obtain the third image;
fusing the first image and the related information of the second image to obtain the third image.
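A minimal, self-contained illustration of the first option (superimposing the related information on an image area of the first image) is sketched below, using nested lists as stand-in pixel buffers; a real implementation would operate on platform image buffers, and the `superimpose` function and its layout are assumptions for illustration only.

```python
def superimpose(first, second, top, left):
    """Overlay `second` onto `first` starting at row `top`, column `left`.

    Images are modeled as 2-D lists of pixel values; out-of-bounds parts
    of `second` are clipped. Returns a new composite (the "third image")
    without modifying the first image.
    """
    third = [row[:] for row in first]  # copy so the first image is untouched
    for r, row in enumerate(second):
        for c, px in enumerate(row):
            if 0 <= top + r < len(third) and 0 <= left + c < len(third[0]):
                third[top + r][left + c] = px
    return third

first_image = [[0] * 4 for _ in range(4)]   # 4x4 frame from the first camera
second_image = [[9, 9], [9, 9]]             # 2x2 region (e.g. a code area)
third_image = superimpose(first_image, second_image, top=1, left=1)
```

Splicing would instead concatenate the two buffers side by side, and fusing would blend overlapping pixel values; both follow the same copy-then-combine pattern.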
In some embodiments, the electronic device may first control the second application to transmit the related information of the second image to the first application, and the first application synthesizes the related information of the first image and the second image to obtain the third image.
In some embodiments, the electronic device may control the first application to transmit the first image to a third application and control the second application to transmit the related information of the second image to the third application, so that the third application synthesizes them to obtain the third image.
In some embodiments, after the electronic device controls the second application to collect the second image, the second application may be controlled to process the second image through related functions in the second application to obtain related information of the second image.
For example, the electronic device may identify the second image through an image identification function or a code scanning function of the second application, so as to obtain an identification result of the second image.
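To make the "recognition result" variant of the related information concrete, the sketch below models the second application's code-scanning function as a simple lookup table (a real application would run an actual decoder); the recognized link is then attached to the first image as a text overlay. The table, the payload string, and all function names are hypothetical.

```python
# Hypothetical decoder: maps raw "image" payloads to decoded results,
# standing in for the second application's code-scanning function.
DECODER_TABLE = {
    "qr:example": "https://example.com/detail",
}

def recognize(second_image):
    # Stand-in for the second application's recognition function.
    result = DECODER_TABLE.get(second_image)
    if result is None:
        raise ValueError("second application could not recognize the image")
    return result

def synthesize_with_text(first_image, text):
    # Attach the recognition result to the first image as a text overlay;
    # this is the "related information" being composed into the third image.
    return {"base": first_image, "text_overlay": text}

link = recognize("qr:example")
third = synthesize_with_text("front-camera frame", link)
```

This mirrors Example 2 below, where the decoded link (rather than the raw code image) is what gets added to the first camera's picture.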
Step 103, the electronic device displays a third image in the first interface.
The first interface is a picture display interface corresponding to the first camera in the first application.
In some embodiments, the picture display interface may include any of the following: a video recording interface, an image preview interface, a video call interface, or a video call request interface. Therefore, the electronic device can display the synthesized image, obtained from the image collected by the first camera and the related information of the image collected by the second camera, in the video recording interface, image preview interface, video call interface, or video call request interface corresponding to the first camera in the first application. The first application can then perform the same operations on the synthesized image as on an image collected by the first camera alone, for example sharing it with a friend or making it into a video, which can improve the flexibility with which an application processes images collected by different cameras.
It will be appreciated that the purpose of displaying the third image in the first interface is to further process the third image through the first interface.
For example, assuming that the first interface is a video call interface, the third image may be shared to the call friend through the first interface.
For example, assuming that the first interface is a video recording interface, the third image may be spliced with the already recorded video clip by the first interface to be synthesized into one video.
It can be appreciated that the image display method provided in the embodiments of the present application may be applied to a scenario in which, while invoking the first camera to acquire images through the first application (for example, to record a video or conduct a video call), the user wants to add additional information to the image acquired by the first camera.
For example, application A invokes camera 1 (e.g., a front camera) to record video (e.g., for a video chat), and the user needs to use camera 2 (e.g., a rear camera) to perform an operation such as scanning a code, photographing, or video recording. The user then wants to combine the image acquired by camera 2, or the recognition result of that image, with the image acquired by camera 1, so as to add the combined content to the recorded video picture.
The image display method provided in the embodiment of the present application is described below with reference to examples.
Example 1: assume that during a video call with a friend through the front camera (the first camera), the user wants to collect an image of a two-dimensional code through the rear camera (the second camera) for the friend to see. The user can trigger the electronic device to collect the image through the rear camera. The electronic device can then, through the communication application, synthesize the image area where the two-dimensional code is located with the image acquired by the front camera. As shown in fig. 3, the electronic device may send the composite image 31 to the friend through the video interface 30 of the communication application; the composite image 31 may include a portrait 32 collected by the front camera and a two-dimensional code 33 from the image collected by the rear camera.
Example 2: assume that the first application has a video recording function but no two-dimensional code recognition function. When the first application records a video about some content through the front camera, and a certain part of that content involves a two-dimensional code (for example, the explanation at that point requires a viewer to scan a code from the recorded video, or to click a link of the two-dimensional code, to view the details), the user can trigger the electronic device to control a second application to acquire a two-dimensional code image through the rear camera and recognize it through the two-dimensional code recognition function of the second application, so as to obtain the link corresponding to the two-dimensional code. The link is then added to the picture acquired by the front camera to obtain a composite image. As shown in fig. 4, the electronic device can display the composite image 41 in the video interface 40 of the first application; the composite image 41 may include a portrait 42 acquired by the front camera and a link 43 corresponding to the two-dimensional code in the image acquired by the rear camera.
In the image display method provided by the embodiments of the application, the first application can be controlled to acquire the first image through the first camera, and the second application can be controlled to acquire the second image through the second camera; the first image is synthesized with the related information of the second image to obtain the third image, and the third image is displayed in the interface of the first application corresponding to the first camera. In this way, the first application can obtain the second image collected through the second camera without switching the invoked camera, and the synthesized image including the related information of the second image (that is, the third image) is displayed in the first interface, which can improve the flexibility of acquiring and displaying images through the cameras.
In some embodiments, the step 101 may include the following steps 101a and 101b.
Step 101a, the electronic device receives a first input of a user when the first application successfully invokes the first camera.
Step 101b, the electronic device responds to the first input, controls the first application to keep calling the first camera, and controls the second application to call the second camera so as to control the first application to collect the first image through the first camera and control the second application to collect the second image through the second camera.
In some embodiments, the first input includes, but is not limited to: a touch input by the user through a finger, a stylus, or another touch device, a voice command input by the user, a specific gesture input by the user, or another feasible input. This may be determined according to actual use requirements and is not limited in the embodiments of the present application.
In some embodiments of the present application, the specific gesture may be any one of a single-click gesture, a swipe gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture.
In some embodiments of the present application, the click input may be a single click input, a double click input, or any number of click inputs, and may also be a long press input or a short press input.
In this embodiment of the present application, after the first application successfully invokes the first camera, the first application may perform image acquisition through the first camera, for example, take a photograph or record a video or perform a video call.
In some embodiments, the first input may be used to trigger the electronic device to control the second application to invoke the second camera and to capture an image through the second camera while maintaining the first application invoking the first camera.
It can be understood that after the second application invokes the second camera, the picture acquired by the second camera can be acquired.
In some embodiments, in a first manner, when the first camera has already been invoked by the first application, the user may first open the second application, and then, through a first input to the second application, trigger a first function of the second application related to invoking a camera, such as a function of scanning a two-dimensional code. The second application then invokes the second camera, acquires the second image through the second camera, and processes the second image through the first function to obtain the related information of the second image. In this case, the user may manually trigger the electronic device to synthesize the related information of the second image with the image acquired by the first camera, such as the first image.
In some embodiments, in a second manner, when the first camera has already been invoked by the first application, the user may trigger the second application to invoke the second camera through an input on the picture display interface corresponding to the first camera in the first application, such as the first interface described below. In this case, the electronic device may by default transmit the related information of the image collected by the second camera to the first application, so that the first application synthesizes it with the image collected by the first camera.
Therefore, compared with the related-art scheme in which one application is controlled to stop invoking one camera before another application is controlled to invoke another camera, the image display method provided by the embodiments of the present application can improve the operational flexibility with which applications invoke cameras.
In some embodiments, in the second manner, in the case where the first application is the same as the second application, the step 101a may include the following steps a to C, and the step 101b may include the following step D.
And step A, the electronic equipment receives a first sub-input of a user to the first interface under the condition that the first application successfully calls the first camera.
And B, the electronic equipment responds to the first sub-input, and at least one identifier is displayed in the first interface.
The first interface may include the image acquired by the first camera, and each identifier of the at least one identifier indicates one camera of the at least two cameras.
In some embodiments, the at least one identifier indicates the remaining cameras of the at least two cameras except for the first camera, i.e. the electronic device does not display the identifier of the camera that has been invoked.
Of course, in a practical implementation, even when one camera has already been invoked, the electronic device may still display the identifiers of all the cameras in the electronic device, so that another application can also invoke that camera.
It will be appreciated that after the first application successfully invokes the first camera, the first camera may begin to capture an image, and after each image is captured, the image may be displayed in the first interface, so that the user may view the image.
In some embodiments, the at least two cameras may include a first camera and a second camera.
In some embodiments, the first sub-input is used to trigger the electronic device to display the identifiers of all the cameras in the electronic device.
In some embodiments, the first sub-input includes, but is not limited to: a touch input by the user on the first interface through a finger, a stylus, or another touch device, a voice command input by the user, a specific gesture input by the user, or another feasible input. This may be determined according to actual use requirements and is not limited in the embodiments of the present application.
In some embodiments of the present application, the specific gesture may be any one of a single-click gesture, a swipe gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture.
In some embodiments of the present application, the click input may be a single click input, a double click input, or any number of click inputs, and may also be a long press input or a short press input.
Take a mobile phone as an example of the electronic device. As shown in fig. 5A, the mobile phone displays a video capturing interface 50 corresponding to the rear camera in the camera application, where the video capturing interface 50 includes an image 51 collected by the rear camera. The user may make a first sub-input on the image element 52 in the image, for example by clicking on the image element 52, and, as shown in fig. 5B, the electronic device may display three identifiers 53, 54, and 55, each indicating one camera in the electronic device.
Step C, the electronic device receives a second sub-input from the user on a first identifier of the at least one identifier.
Wherein the first identifier indicates a second camera of the at least two cameras.
In some embodiments, the second sub-input includes, but is not limited to: a touch input performed by the user on the first identifier with a finger, a stylus, or another touch device; a voice command input by the user; a specific gesture input by the user; or another feasible input. The specific form may be determined according to actual use requirements and is not limited in the embodiments of the present application.
In some embodiments of the present application, the specific gesture may be any one of a single-click gesture, a swipe gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture.
In some embodiments of the present application, the click input may be a single click input, a double click input, or any number of click inputs, and may also be a long press input or a short press input.
Step D, in response to the second sub-input, the electronic device controls the first application to keep calling the first camera and controls the second application to call the second camera, so that the first application collects a first image through the first camera and the second application collects a second image through the second camera.
In this way, an input on the picture display interface corresponding to the first camera in the first application can trigger the electronic device to display the identifiers of the at least two cameras, and an input on one of the identifiers can trigger the second application to call the second camera. After the second application obtains the related information of the image collected by the second camera, the electronic device can directly synthesize that information with the image collected by the first camera, without requiring the user to manually trigger the synthesis and display on the picture display interface, which simplifies the operation of synthesizing and displaying images collected by different cameras.
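One way the electronic device might track which application holds which camera, so that the first application keeps calling the first camera while the second application calls the second, is a simple registry; the names below (`CameraManager`, `invoke`) are hypothetical, not part of any real platform API.

```python
class CameraManager:
    """Hypothetical registry recording which application is using which camera."""
    def __init__(self, camera_ids):
        self.assignments = {cid: None for cid in camera_ids}

    def invoke(self, app, camera_id):
        # A camera that is already being called stays with its current
        # application; the remaining cameras stay available to other apps.
        if self.assignments[camera_id] is None:
            self.assignments[camera_id] = app
        return self.assignments[camera_id] == app


manager = CameraManager(camera_ids=[1, 2, 3])
manager.invoke("first_app", 1)    # the first application calls the first camera
manager.invoke("second_app", 2)   # the second application calls the second camera
print(manager.assignments)        # camera 3 is still unassigned
```

A second application asking for an already-held camera is simply refused, so the first application keeps its camera throughout, as the step above requires.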
In some embodiments, in the first manner, the user may input directly to the second application to trigger the electronic device to control the second application to call the second camera; after the second camera collects the second image, the user may manually trigger the electronic device to synthesize the related information of the first image and the second image to obtain the third image.
In some embodiments of the present application, in the first manner, before the step 102, the image display method provided in the embodiments of the present application may further include the following step 104, and the step 102 may include the following steps 102a and 102b.
Step 104, the electronic device displays the first interface and displays the second image.
Wherein the first interface includes a first image therein.
In some embodiments, the electronic device may display the first interface after the first application successfully invokes the first camera, and display an image, such as a first image, acquired by the first camera in the first interface. After the second application invokes the second camera, the electronic device may keep displaying the first interface and display an image acquired by the second camera, such as a second image.
In some embodiments, the electronic device may superimpose the second image on the first interface, i.e., display the second image in a picture-in-picture manner; alternatively, the electronic device may display the first interface and a second interface in a split-screen manner, where the second interface includes the second image and is the interface corresponding to the second camera in the second application.
For example, assuming that the electronic device is a mobile phone, as shown in fig. 6A, the mobile phone may display a first interface 60, a first image 61 may be included in the first interface 60, and a second image 62 is displayed on the first interface 60 in an overlaid manner.
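The two display modes just described, picture-in-picture and split screen, amount to computing two display rectangles. The sketch below illustrates one possible layout calculation; the mode names, corner placement, and roughly 30% window scale are assumptions for illustration only.

```python
def layout(mode, screen_w, screen_h):
    """Return (x, y, w, h) rectangles for the first and second pictures.

    mode "pip": the second image is overlaid in the top-right corner of the
    full-screen first interface; mode "split": the screen is divided
    vertically between the first and second interfaces.
    """
    if mode == "pip":
        # small window, roughly 30% of the screen (integer arithmetic)
        pw, ph = screen_w * 3 // 10, screen_h * 3 // 10
        first = (0, 0, screen_w, screen_h)
        second = (screen_w - pw, 0, pw, ph)   # superimposed on the first interface
    elif mode == "split":
        first = (0, 0, screen_w, screen_h // 2)
        second = (0, screen_h // 2, screen_w, screen_h - screen_h // 2)
    else:
        raise ValueError(mode)
    return first, second


first_rect, second_rect = layout("pip", 1080, 2400)
print(second_rect)   # a small corner window over the full-screen first interface
```

In picture-in-picture mode the first interface keeps the whole screen and the second image floats above it; in split-screen mode the two interfaces share the screen without overlap.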
In some embodiments, the second image may be displayed in an image presentation interface of the second application corresponding to the second camera, and of course, the electronic device may also directly display the second image.
Step 102a, the electronic device receives a second input of the user moving from the second image to the first image.
In some embodiments, the second input is for triggering the electronic device to synthesize information related to the second image with the first image.
In some embodiments, the second input may be a sliding input or a drag input.
Step 102b, the electronic device synthesizes the related information of the first image and the second image in response to the second input, and obtains a third image.
For the description of the related information of the second image, refer to the description of the related information of the image in step 102, and in order to avoid repetition, the description is omitted here.
Therefore, the user can manually trigger the electronic device to synthesize the related information of the first image and the second image. For example, after the second camera collects an image that meets the synthesis requirement, the user can trigger the electronic device to synthesize the related information of that image with the image collected by the first camera, so that the synthesized image meets the user's requirement, which improves the operational flexibility of image synthesis.
In some embodiments of the present application, the related information of the second image is an image element in the second image; the step 102a may include the following step E, and the step 102b may include the following step F.
Step E, the electronic device receives a second input of the user moving from a first area of the second image to a second area of the first image.
In some embodiments, the first image may include at least one image element therein, and the second image may include at least one image element therein.
In some embodiments, one or more image elements in the second image may be included in the first region.
In some embodiments, the second region may include one or more image elements in the first image, or the second region may be a blank region in the first image.
Step F, the electronic device synthesizes the image elements in the first area of the second image into the second area of the first image to obtain a third image.
In some embodiments, the electronic device may cover the second region of the first image with the image elements from the first region of the second image to obtain the third image.
In some embodiments, the electronic device may first crop out the second region of the first image and then stitch the image elements from the first region of the second image into that position to obtain the third image.
In some embodiments, the display size of the synthesized image elements in the third image may be the same as their display size in the second image.
Alternatively, the display size of the synthesized image elements in the third image may be adaptively adjusted according to the blank space of the area where the second region is located, that is, adjusted so as not to cover the image elements in the first image.
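The region-based synthesis of step F can be illustrated with plain nested lists standing in for pixel grids; `composite_element` is a hypothetical helper showing the covering variant, where the element taken from the second image simply overwrites the target region of the first image.

```python
def composite_element(first_image, element, top_left):
    """Cover the target region of `first_image` with `element`.

    `first_image` and `element` are 2D lists of pixel values; `top_left`
    is the (row, col) where the element lands. A real implementation
    would operate on camera frames, not toy grids.
    """
    out = [row[:] for row in first_image]      # copy; keep the original intact
    r0, c0 = top_left
    for dr, row in enumerate(element):
        for dc, px in enumerate(row):
            out[r0 + dr][c0 + dc] = px         # cover the second region
    return out


first = [[0] * 4 for _ in range(4)]            # blank first image
person = [[7, 7], [7, 7]]                      # element cut from the second image
third = composite_element(first, person, (1, 1))
```

The crop-and-stitch variant described above differs only in that the covered pixels are removed before the element is placed; with full overwrite, the two produce the same result.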
The image display method provided in the embodiment of the present application is described below with reference to examples.
Illustratively, as shown in fig. 6A, the first image 61 includes a scene 63, and the second image 62 includes a person 64. If the user wants to show the person 64 below the scene 63, the user can long-press the person 64 in the second image and slide to below the scene 63, thereby triggering the electronic device to synthesize the person 64 below the scene 63 in the first image to obtain a third image. As shown in fig. 6B, the electronic device may display the third image 65 in the first interface; it can be seen that the third image 65 includes the person 64.
Further, the electronic device may optimize the person 64 through the second application and then synthesize it with the first image.
In some embodiments, assuming that the image element in the first region is a first image element, if the first application is recording a video or conducting a video call, the electronic device may automatically synthesize the first image element in images subsequently acquired by the second camera into the second region of images subsequently acquired by the first camera.
For example, referring to fig. 6A and 6B, the electronic device may synthesize the person 64 subsequently captured by the second camera below the scene 63 in images subsequently captured by the first camera. The person 64 and the scene 63 may then be referred to as associated image elements.
Thus, the user can manually select, from the second image, the image elements to be synthesized into the image acquired by the first camera, which improves the operational flexibility of combining image elements from one image into another.
In some embodiments, before the step 102, the image display method provided in the embodiments of the present application may further include the following steps 105 to 108.
Step 105, the electronic device receives a third input from the user to the first interface.
The first interface includes an image acquired by the first camera, such as a first image.
In some embodiments, the third input includes, but is not limited to: a touch input performed by the user on the first interface with a finger, a stylus, or another touch device; a voice command input by the user; a specific gesture input by the user; or another feasible input. The specific form may be determined according to actual use requirements and is not limited in the embodiments of the present application.
In some embodiments of the present application, the specific gesture may be any one of a single-click gesture, a swipe gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture.
In some embodiments of the present application, the click input may be a single click input, a double click input, or any number of click inputs, and may also be a long press input or a short press input.
For example, the third input is a long press input by the user on the first interface.
And 106, the electronic equipment responds to the third input and displays the identification of the first camera and the identification of the second camera.
In some embodiments, the electronic device may display an identification of the first camera and an identification of the second camera on the first interface.
Step 107, the electronic device receives a fourth input of the user on the identifier of the first camera and the identifier of the second camera.
In some embodiments, the fourth input includes, but is not limited to: a touch input performed by the user on the identifier of the first camera and the identifier of the second camera with a finger, a stylus, or another touch device; a voice command input by the user; a specific gesture input by the user; or another feasible input. The specific form may be determined according to actual use requirements and is not limited in the embodiments of the present application.
In some embodiments of the present application, the specific gesture may be any one of a single-click gesture, a swipe gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture.
In some embodiments of the present application, the click input may be a single click input, a double click input, or any number of click inputs, and may also be a long press input or a short press input.
For example, the fourth input is a single click input by the user on the identifier of the first camera and the identifier of the second camera in sequence.
Step 108, the electronic device, in response to the fourth input, sets the transmission direction of the image acquired by the first camera and the image acquired by the second camera.
The transmission direction can be used to indicate which application synthesizes the image acquired by the first camera and the image acquired by the second camera and displays the synthesized image.
In some embodiments, the electronic device may set the transmission direction of the image collected by the first camera and the image collected by the second camera according to the order in which the fourth input acts on the identifier of the first camera and the identifier of the second camera.
For example, if the user clicks the identifier of the first camera first and then clicks the identifier of the second camera, the electronic device may set that the first application synthesizes the images collected by the first camera and the second camera, and that the first application displays the synthesized image.
In some embodiments, the electronic device may set a transmission direction of the image captured by the first camera and the image captured by the second camera according to an input direction of the fourth input.
For example, assuming that the fourth input is a sliding input by which the user slides from the identifier of the first camera to the identifier of the second camera, the electronic device may set that the second application synthesizes the images acquired by the first camera and the second camera, and that the second application presents the synthesized image.
In some embodiments, the electronic device may control the application that invokes a camera to process the image collected by that camera, such as retouching, cropping, or adding a filter, and then transmit the processed image to the application that performs image synthesis.
In some embodiments, in the case where multiple cameras in the electronic device each perform image acquisition, the user may manually specify the transmission direction of the images acquired by the multiple cameras. The transmission direction of the image is used for determining the application of processing the image and displaying the final composite image.
Steps 105 to 108 are explained below with reference to examples.
Illustratively, assume that the electronic device includes a camera 1, a camera 2, and a camera 3, where the first camera is camera 1, the second camera includes camera 2 and camera 3, and the first application records video through camera 1. If the user wants to shoot or record video through camera 2 and camera 3 and then synthesize the pictures collected by camera 2 and camera 3 into the video picture recorded through camera 1, then: the user may input on the camera preview interface corresponding to camera 1 in the first application; as shown in fig. 7A, the electronic device may display the identifier 71 of camera 1, the identifier 72 of camera 2, and the identifier 73 of camera 3; the user may then slide from the identifier 72 to the identifier 71 and from the identifier 73 to the identifier 71, thereby triggering the electronic device to take the images collected by camera 2 and camera 3, process them through the second application (such as person beautification or background blurring), synthesize them into the image collected by camera 1, and display the synthesized image in the first application.
Specifically, as shown in fig. 7B, assuming that the image collected by camera 1 is an image 74, the image collected by camera 2 is an image 75, and the image collected by camera 3 is an image 76, then: as shown in fig. 7C, the electronic device may display the composite image 77 in the image presentation interface 78 corresponding to camera 1 in the first application, and it can be seen that the image 77 includes image elements from the images 74, 75, and 76. It should be noted that the words "camera 1", "camera 2", and "camera 3" in fig. 7B are all used to illustrate the source of the images and may not be displayed in actual implementation; the arrows in fig. 7B are only used to illustrate the transmission direction of the images and may also not be displayed in actual implementation.
As a further example, assume that the electronic device includes a camera 1, a camera 2, and a camera 3, where the second camera is camera 2, the first camera includes camera 1 and camera 3, and the first application records video through camera 1 and camera 3, respectively. If the user needs to synthesize the picture acquired by camera 2 into the video pictures recorded by camera 1 and camera 3, then: the user can input on the video preview interface corresponding to camera 1 in the first application, so that the electronic device transmits the image acquired by camera 2 to camera 1, and can likewise specify, through an input on the video preview interface corresponding to camera 3 in the first application, that the image acquired by camera 2 is transmitted to camera 3. The electronic device is thereby triggered to take the image collected by camera 2, process it through the second application (such as person beautification or background blurring), synthesize it into the images collected by camera 1 and camera 3, and display the corresponding synthesized images in the corresponding image preview interfaces of the first application.
Specifically, as shown in fig. 7D, assuming that the image collected by camera 1 is an image 74, the image collected by camera 2 is an image 75, and the image collected by camera 3 is an image 76, then: as shown in fig. 7E, the electronic device may display the composite image 80 and the composite image 81 in one image presentation interface 79 of the first application. It can be seen that the composite image 80 includes image elements from the images 74 and 75, and the composite image 81 includes image elements from the images 75 and 76. It should be noted that the words "camera 1", "camera 2", and "camera 3" in fig. 7D are all used to illustrate the source of the images and may not be displayed in actual implementation; the arrows in fig. 7D are only used to illustrate the transmission direction of the images and may also not be displayed in actual implementation.
As another example, assume that the user first uses a local recording application (namely the first application) to call the rear camera to record a video, and then uses a communication application to call the front camera for a video chat. Later, the user wants to share part of the picture recorded by the rear camera with the video chat partner, but does not want to switch the camera called by the communication application. The user can therefore trigger, through an input on the video interface in the communication application, the electronic device to synthesize the picture collected by the rear camera into the picture collected by the front camera, so that both the picture collected by the rear camera and the picture collected by the front camera are shared with the chat partner through the video interface.
It is to be understood that the foregoing method embodiments, or various possible implementation manners in the method embodiments, may be executed separately, or may be executed in combination with each other on the premise that no contradiction exists, and may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
According to the image display method provided by the embodiment of the application, the execution subject can be an image display device. In the embodiment of the present application, an image display device is described by taking an example in which the image display device performs an image display method.
Fig. 8 illustrates an image display device provided in an embodiment of the present application, where the image display device provided in the embodiment of the present application may include at least two cameras, as shown in fig. 8, the image display device 800 may further include:
a control module 801, configured to control a first application to acquire a first image through a first camera of the at least two cameras, and control a second application to acquire a second image through a second camera of the at least two cameras;
a synthesizing module 802, configured to synthesize the related information of the first image and the second image to obtain a third image;
the display module 803 is configured to display the third image synthesized by the synthesis module in a first interface, where the first interface is a picture display interface corresponding to the first camera in the first application;
wherein the related information of the second image includes at least one of: the second image, the image elements in the second image, the recognition result of the second image.
In some embodiments, the image display device further comprises a receiving module;
the receiving module is used for receiving a first input of a user under the condition that the first application successfully calls the first camera;
The control module is specifically configured to respond to the first input received by the receiving module, control the first application to keep calling the first camera, and control the second application to call the second camera, so as to control the first application to collect a first image through the first camera and control the second application to collect a second image through the second camera.
In some embodiments, the second application is the same as the first application;
the receiving module is specifically configured to receive a first sub-input of a user to the first interface, where the first interface includes an image acquired by the first camera; and in response to the first sub-input, displaying at least one identifier in the first interface, each identifier indicating one of the at least two cameras; receiving a second sub-input of a first identifier of the at least one identifier by a user, wherein the first identifier indicates the second camera;
the control module is specifically configured to respond to the second sub-input, control the first application to keep calling the first camera, and control the second application to call the second camera, so as to control the first application to collect a first image through the first camera and control the second application to collect a second image through the second camera.
In some embodiments, the display module is further configured to display the first interface and display the second image before the synthesizing module synthesizes the related information of the first image and the second image to obtain a third image, where the first interface includes the first image;
the image display device further comprises a receiving module;
the receiving module is used for receiving a second input of a user moving from the second image to the first image;
the synthesizing module is specifically configured to synthesize the related information of the first image and the second image in response to the second input received by the receiving module, so as to obtain a third image.
In some embodiments, the related information of the second image is an image element in the second image;
the receiving module is specifically configured to receive a second input of a user moving from a first area of the second image to a second area of the first image;
the synthesizing module is specifically configured to synthesize, in response to the second input, the image elements in the first area of the second image into the second area of the first image, so as to obtain the third image.
In some embodiments, the image display apparatus further comprises a receiving module and a setting module;
the receiving module is used for receiving third input of a user to the first interface before the synthesizing module synthesizes the related information of the first image and the second image to obtain a third image, wherein the first interface comprises the image acquired by the first camera;
the display module is further used for responding to the third input received by the receiving module and displaying the identification of the first camera and the identification of the second camera;
the receiving module is further used for receiving a fourth input of the user on the identification of the first camera and the identification of the second camera;
the setting module is used for setting the transmission directions of the image acquired by the first camera and the image acquired by the second camera in response to the fourth input received by the receiving module;
the transmission direction is used for indicating the application of combining the image acquired by the first camera and the image acquired by the second camera and displaying the combined image.
In the image display device provided by the embodiment of the application, the first application can be controlled to acquire the first image through the first camera, the second application can be controlled to acquire the second image through the second camera, the related information of the first image and the second image is synthesized to obtain the third image, and the third image is displayed in the interface of the first application corresponding to the first camera, so that the first application can acquire the second image through the second camera without switching the called camera, and the synthesized image (namely the third image) comprising the related information of the second image is displayed in the first interface, and the flexibility of acquiring and displaying the images through the cameras can be improved.
The image display device in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited thereto.
The image display device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image display device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 7, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 9, an embodiment of the present application further provides an electronic device 900, including a processor 901 and a memory 902, where the memory 902 stores a program or instructions executable on the processor 901; when executed by the processor 901, the program or instructions implement the steps of the image display method embodiments above and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be noted that, the electronic device in the embodiment of the present application includes a mobile electronic device and a non-mobile electronic device.
Fig. 10 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: radio frequency unit 701, network module 702, audio output unit 703, input unit 704, sensor 705, display unit 706, user input unit 707, interface unit 708, memory 709, processor 710, and at least two cameras.
Those skilled in the art will appreciate that the electronic device 700 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 710 via a power management system so as to perform functions such as managing charge, discharge, and power consumption via the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 710 is configured to control a first application to acquire a first image through a first camera of the at least two cameras, and control a second application to acquire a second image through a second camera of the at least two cameras;
a processor 710, configured to synthesize information related to the first image and the second image to obtain a third image;
the display unit 706 is configured to display the synthesized third image in a first interface, where the first interface is a picture display interface corresponding to the first camera in the first application;
wherein the related information of the second image includes at least one of: the second image, the image elements in the second image, the recognition result of the second image.
In some embodiments, the user input unit 707 is configured to receive a first input from a user if the first application successfully invokes the first camera;
the processor 710 is specifically configured to, in response to the first input received by the user input unit 707, control the first application to keep calling the first camera, and control the second application to call the second camera, so as to control the first application to collect a first image through the first camera and control the second application to collect a second image through the second camera.
In some embodiments, the second application is the same as the first application;
the user input unit 707 is specifically configured to receive a first sub-input of the user to the first interface, where the first interface includes an image acquired by the first camera; and in response to the first sub-input, displaying at least one identifier in the first interface, each identifier indicating one of the at least two cameras; receiving a second sub-input of a first identifier of the at least one identifier by a user, wherein the first identifier indicates the second camera;
the processor 710 is specifically configured to respond to the second sub-input, control the first application to keep calling the first camera, and control the second application to call the second camera, so as to control the first application to collect a first image through the first camera and control the second application to collect a second image through the second camera.
In some embodiments, the display unit 706 is further configured to display the first interface and the second image before the processor 710 synthesizes the first image with the related information of the second image to obtain a third image, where the first interface includes the first image;
the user input unit 707 is configured to receive a second input from a user, the second input moving from the second image to the first image;
the processor 710 is configured to, in response to the second input received by the user input unit 707, synthesize the first image with the related information of the second image to obtain the third image.
In some embodiments, the related information of the second image is an image element in the second image; the user input unit 707 is configured to receive a second input from a user, the second input moving from a first region of the second image to a second region of the first image;
the processor 710 is configured to, in response to the second input, synthesize the image element in the first region of the second image into the second region of the first image to obtain the third image.
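A toy sketch of that region-to-region composition (plain 2-D lists stand in for images; the function name and region format are invented for illustration, not part of the patent):

```python
# Hypothetical sketch: copy the image element occupying a region of the second
# image into a region of the first image, yielding the third image.
# A region is (top_row, left_col, height, width); images are 2-D lists.
def composite_region(first_image, second_image, src_region, dst_region):
    src_row, src_col, height, width = src_region
    dst_row, dst_col = dst_region[0], dst_region[1]
    # Copy the first image so the original frame stays untouched.
    third_image = [row[:] for row in first_image]
    for r in range(height):
        for c in range(width):
            third_image[dst_row + r][dst_col + c] = second_image[src_row + r][src_col + c]
    return third_image


first = [[0] * 4 for _ in range(4)]   # 4x4 "first image" of zeros
second = [[9] * 4 for _ in range(4)]  # 4x4 "second image" of nines
# Drag from the top-left 2x2 region of the second image into the region
# of the first image starting at row 1, column 1.
third = composite_region(first, second, (0, 0, 2, 2), (1, 1, 2, 2))
```

A real implementation would operate on camera frame buffers and could blend rather than overwrite pixels; the sketch only shows the region mapping implied by the drag gesture.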
In some embodiments, the user input unit 707 is configured to receive a third input from the user on the first interface before the processor 710 synthesizes the first image with the related information of the second image to obtain a third image, where the first interface includes an image acquired by the first camera;
the display unit 706 is further configured to display an identifier of the first camera and an identifier of the second camera in response to the third input received by the user input unit 707;
the user input unit 707 is further configured to receive a fourth input from the user on the identifier of the first camera and the identifier of the second camera;
the processor 710 is configured to set, in response to the fourth input received by the user input unit 707, a transmission direction between the image acquired by the first camera and the image acquired by the second camera;
the transmission direction indicates which application combines the image acquired by the first camera with the image acquired by the second camera and displays the combined image.
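One way to picture the transmission-direction setting described above (all names here are hypothetical, chosen only for illustration) is a routing rule recording which camera's frames are forwarded toward which interface, and which application composites and displays the result:

```python
# Hypothetical sketch: the "transmission direction" as a routing rule naming
# which application combines the two cameras' images and shows the result.
def set_transmission_direction(source_camera, target_camera, display_app):
    return {
        "source": source_camera,      # camera whose frames are forwarded
        "target": target_camera,      # camera whose interface receives them
        "display_app": display_app,   # application that composites and displays
    }


# Frames from the second camera flow toward the first camera's interface,
# and the first application displays the combined image.
rule = set_transmission_direction("second_camera", "first_camera", "first_app")
```

Reversing the two camera arguments would model the opposite direction, with the second application displaying the combined image instead.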
In the electronic device provided by the embodiments of the application, the first application can be controlled to acquire a first image through the first camera while the second application is controlled to acquire a second image through the second camera. The first image is synthesized with the related information of the second image to obtain a third image, and the third image is displayed in the interface of the first application corresponding to the first camera. In this way, the first application can obtain the second image captured by the second camera without switching the camera it invokes, and the synthesized image (that is, the third image) including the related information of the second image is displayed in the first interface, which improves the flexibility of acquiring and displaying images through the cameras.
It should be appreciated that in embodiments of the present application, the input unit 704 may include a graphics processor (Graphics Processing Unit, GPU) 7041 and a microphone 7042, with the graphics processor 7041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts, a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 709 may include volatile or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a SyncLink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 709 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 710 may include one or more processing units; optionally, the processor 710 integrates an application processor, which primarily handles operations involving the operating system, user interface, and application programs, and a modem processor, such as a baseband processor, which primarily handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 710.
The embodiments of the present application further provide a readable storage medium storing a program or instructions. When the program or instructions are executed by a processor, the processes of the above image display method embodiments are implemented, with the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiments of the present application further provide a chip. The chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is configured to run a program or instructions to implement the processes of the above image display method embodiments, with the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-level chips, system chips, chip systems, or system-on-chip chips.
The embodiments of the present application further provide a computer program product stored in a storage medium. The program product is executed by at least one processor to implement the processes of the above image display method embodiments, with the same technical effects, which are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferred. Based on such an understanding, the technical solutions of the present application, or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above embodiments, which are merely illustrative and not restrictive. Those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (14)

1. An image display method, characterized by being applied to an electronic device including at least two cameras, the method comprising:
controlling a first application to acquire a first image through a first camera of the at least two cameras, and controlling a second application to acquire a second image through a second camera of the at least two cameras;
synthesizing the first image with related information of the second image to obtain a third image; and
displaying the third image in a first interface, wherein the first interface is a picture display interface corresponding to the first camera in the first application;
wherein the related information of the second image includes at least one of: the second image, an image element in the second image, or a recognition result of the second image.
2. The method of claim 1, wherein the controlling a first application to acquire a first image through a first camera of the at least two cameras and controlling a second application to acquire a second image through a second camera of the at least two cameras comprises:
receiving a first input from a user when the first application has successfully invoked the first camera; and
in response to the first input, controlling the first application to keep invoking the first camera and controlling the second application to invoke the second camera, so that the first application acquires a first image through the first camera and the second application acquires a second image through the second camera.
3. The method of claim 2, wherein the second application is the same as the first application;
the receiving a first input from a user comprises:
receiving a first sub-input from the user on the first interface, wherein the first interface includes an image acquired by the first camera;
in response to the first sub-input, displaying at least one identifier in the first interface, each identifier indicating one of the at least two cameras; and
receiving a second sub-input from the user on a first identifier of the at least one identifier, wherein the first identifier indicates the second camera; and
the controlling, in response to the first input, the first application to keep invoking the first camera and the second application to invoke the second camera comprises:
in response to the second sub-input, controlling the first application to keep invoking the first camera and controlling the second application to invoke the second camera, so that the first application acquires a first image through the first camera and the second application acquires a second image through the second camera.
4. The method of claim 1, wherein before the synthesizing the first image with the related information of the second image to obtain a third image, the method further comprises:
displaying the first interface and displaying the second image, wherein the first interface includes the first image; and
the synthesizing the first image with the related information of the second image to obtain a third image comprises:
receiving a second input of a user moving from the second image to the first image; and
in response to the second input, synthesizing the first image with the related information of the second image to obtain the third image.
5. The method of claim 4, wherein the related information of the second image is an image element in the second image;
the receiving a second input of a user moving from the second image to the first image comprises:
receiving a second input of a user moving from a first region of the second image to a second region of the first image; and
the synthesizing, in response to the second input, the first image with the related information of the second image to obtain the third image comprises:
in response to the second input, synthesizing the image element in the first region of the second image into the second region of the first image to obtain the third image.
6. The method of claim 1, wherein before the synthesizing the first image with the related information of the second image to obtain a third image, the method further comprises:
receiving a third input of a user on the first interface, wherein the first interface includes an image acquired by the first camera;
in response to the third input, displaying an identifier of the first camera and an identifier of the second camera;
receiving a fourth input of the user on the identifier of the first camera and the identifier of the second camera; and
in response to the fourth input, setting a transmission direction between the image acquired by the first camera and the image acquired by the second camera;
wherein the transmission direction indicates which application combines the image acquired by the first camera with the image acquired by the second camera and displays the combined image.
7. An image display device, the device comprising at least two cameras, the device further comprising:
a control module, configured to control a first application to acquire a first image through a first camera of the at least two cameras and control a second application to acquire a second image through a second camera of the at least two cameras;
a synthesizing module, configured to synthesize the first image with related information of the second image to obtain a third image; and
a display module, configured to display, in a first interface, the third image synthesized by the synthesizing module, wherein the first interface is a picture display interface corresponding to the first camera in the first application;
wherein the related information of the second image includes at least one of: the second image, an image element in the second image, or a recognition result of the second image.
8. The apparatus of claim 7, further comprising a receiving module;
wherein the receiving module is configured to receive a first input from a user when the first application has successfully invoked the first camera; and
the control module is configured to, in response to the first input received by the receiving module, control the first application to keep invoking the first camera and control the second application to invoke the second camera, so that the first application acquires a first image through the first camera and the second application acquires a second image through the second camera.
9. The apparatus of claim 8, wherein the second application is the same as the first application;
the receiving module is configured to: receive a first sub-input from the user on the first interface, wherein the first interface includes an image acquired by the first camera; display, in response to the first sub-input, at least one identifier in the first interface, each identifier indicating one of the at least two cameras; and receive a second sub-input from the user on a first identifier of the at least one identifier, wherein the first identifier indicates the second camera; and
the control module is configured to, in response to the second sub-input, control the first application to keep invoking the first camera and control the second application to invoke the second camera, so that the first application acquires a first image through the first camera and the second application acquires a second image through the second camera.
10. The apparatus of claim 7, wherein the display module is further configured to display the first interface and the second image before the synthesizing module synthesizes the first image with the related information of the second image to obtain a third image, wherein the first interface includes the first image;
the apparatus further comprises a receiving module;
the receiving module is configured to receive a second input of a user moving from the second image to the first image; and
the synthesizing module is configured to, in response to the second input received by the receiving module, synthesize the first image with the related information of the second image to obtain the third image.
11. The apparatus of claim 10, wherein the related information of the second image is an image element in the second image;
the receiving module is configured to receive a second input of a user moving from a first region of the second image to a second region of the first image; and
the synthesizing module is configured to, in response to the second input, synthesize the image element in the first region of the second image into the second region of the first image to obtain the third image.
12. The apparatus of claim 7, further comprising a receiving module and a setting module;
wherein the receiving module is configured to receive a third input of a user on the first interface before the synthesizing module synthesizes the first image with the related information of the second image to obtain a third image, wherein the first interface includes an image acquired by the first camera;
the display module is further configured to display an identifier of the first camera and an identifier of the second camera in response to the third input received by the receiving module;
the receiving module is further configured to receive a fourth input of the user on the identifier of the first camera and the identifier of the second camera;
the setting module is configured to set, in response to the fourth input received by the receiving module, a transmission direction between the image acquired by the first camera and the image acquired by the second camera; and
the transmission direction indicates which application combines the image acquired by the first camera with the image acquired by the second camera and displays the combined image.
13. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the image display method of any one of claims 1-6.
14. A readable storage medium, wherein a program or instructions is stored on the readable storage medium, which when executed by a processor, implements the steps of the image display method according to any one of claims 1-6.
CN202311740930.6A 2023-12-15 2023-12-15 Image display method, device, electronic equipment and medium Pending CN117750177A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311740930.6A CN117750177A (en) 2023-12-15 2023-12-15 Image display method, device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117750177A true CN117750177A (en) 2024-03-22

Family

ID=90250263




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination