CN114339050A - Display method and device and electronic equipment - Google Patents

Display method and device and electronic equipment

Info

Publication number: CN114339050A
Application number: CN202111674460.9A
Authority: CN (China)
Prior art keywords: target, image, display, camera, display area
Legal status: Active, Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN114339050B (granted publication)
Inventor: 黎小松 (Li Xiaosong)
Original and current assignee: Xi'an Weiwo Software Technology Co., Ltd.
Events: application CN202111674460.9A filed by Xi'an Weiwo Software Technology Co., Ltd.; publication of CN114339050A; application granted; publication of CN114339050B

Landscapes

  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a display method, a display device, and electronic equipment, and belongs to the field of computer technology. The method includes the following steps: respectively displaying images shot by a plurality of cameras in a plurality of display areas of a screen; receiving a first input on a target object included in a target image among the plurality of images; in response to the first input, performing object recognition on the images shot by a target camera, where the target camera is the camera that shot the target image; and displaying the target object in a target display area corresponding to the target camera when an image shot by the target camera is recognized as including the target object.

Description

Display method and device and electronic equipment
Technical Field
The application belongs to the technical field of computers, and particularly relates to a display method and device and electronic equipment.
Background
With terminals trending toward multiple cameras, more and more scenarios, such as dual-view video recording, require images shot by multiple cameras to be displayed simultaneously.
At present, a terminal usually displays only the central area of the image shot by each of the plurality of cameras on the display screen, so that the image shot by every camera can be shown simultaneously within the limited display area of the screen. However, since the area of the captured image that the terminal displays is fixed, the framing object the user wants may fail to be displayed because it is not located in the central area of the captured image, so the display effect of captured images during shooting by the terminal is poor.
Disclosure of Invention
The embodiments of the application aim to provide a display method, a display device, and electronic equipment that can solve the problem of the poor display effect of captured images during shooting by a terminal.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a display method, where the method includes:
respectively displaying images shot by a plurality of cameras in a plurality of display areas of a screen;
receiving a first input of a target object included in a target image among a plurality of images;
responding to the first input, and performing object recognition on an image shot by a target camera, wherein the target camera is a camera for shooting the target image;
and displaying the target object in a target display area corresponding to the target camera under the condition that the image shot by the target camera is recognized to comprise the target object.
Optionally, the method further includes:
cutting the image of the target camera to obtain a display image, wherein the display image comprises the target object;
the displaying the target object in a target display area corresponding to the target camera includes: displaying the display image in the target display area.
Optionally, the cutting the image of the target camera to obtain a display image includes:
cutting the image shot by the target camera along the boundary line of the circumscribed rectangle of the target object to obtain a preliminary display image;
and adjusting the size of the preliminary display image to the size of the target display area to obtain the display image.
Optionally, the cutting the image of the target camera to obtain a display image includes:
cutting the image shot by the target camera along the boundary line of the circumscribed rectangle of the target object to obtain the display image;
the displaying the display image in the target display area includes: and adjusting the size of the target display area according to the size of the circumscribed rectangle, and displaying the display image in the center of the adjusted target display area.
Optionally, the cutting the image of the target camera to obtain a display image includes:
and cutting the image shot by the target camera along the edge contour of the target object to obtain a display image.
Optionally, the display area has a boundary line, and the displaying the display image in the target display area includes: adjusting the size of the target display area to the size of the display image, displaying the display image in the adjusted target display area, and hiding the boundary line of the target display area.
Optionally, the method further includes:
receiving a second input of the target object displayed by the target display area;
and in response to the second input, stopping object recognition of the image shot by the target camera, and displaying the image shot by the target camera in the target display area.
Optionally, the plurality of display areas are at least partially overlapped, or the plurality of display areas are arranged in an array.
In a second aspect, an embodiment of the present application provides a display device, including:
the display module is used for respectively displaying images shot by the cameras in a plurality of display areas of the screen;
a receiving module for receiving a first input of a target object included in a target image among the plurality of images;
the identification module is used for responding to the first input and carrying out object identification on the image shot by a target camera, wherein the target camera is a camera for shooting the target image;
the display module is further configured to display the target object in a target display area corresponding to the target camera when it is recognized that the image captured by the target camera includes the target object.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, in the case that the plurality of display areas of the screen respectively display the images shot by the plurality of cameras, by receiving a first input of a target object included in a target image in the plurality of images, object recognition can be performed on the image shot by the target camera shooting the target image in response to the first input, so that the target object is displayed in the target display area corresponding to the target camera in the case that the image shot by the target camera includes the target object. According to the technical scheme, after the user selects the target object needing to be framed through the first input, the terminal can display the target object in the target display area under the condition that the image shot by the target camera comprises the target object, and the target object is tracked and displayed by the target camera. Compared with the prior art, the problem that the view object required by the user cannot be displayed because the view object is not located in the fixed area where the terminal can display the shot image is avoided, the display effect of the terminal on the shot image in the shooting process is improved, and the user experience of simultaneously using a plurality of cameras for shooting is improved.
Drawings
Fig. 1 is a flowchart of a display method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of display areas of a terminal according to an embodiment of the present application;
Fig. 3 is a schematic diagram of display areas of a terminal according to an embodiment of the present application;
Fig. 4 is a schematic diagram of display areas of a terminal according to an embodiment of the present application;
Fig. 5 is a flowchart of another display method provided in an embodiment of the present application;
Fig. 6 is a schematic diagram of display areas of a terminal according to an embodiment of the present application;
Fig. 7 is a block diagram of a display device provided in an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and in the claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship. In the description and in the claims, "a plurality" means at least two.
The display method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Please refer to fig. 1, which shows a flowchart of a display method according to an embodiment of the present application. The display method may be applied to an electronic device. Optionally, the electronic device may be a terminal. The terminal can be a mobile phone, a computer, a wearable device, or the like that has a screen and a plurality of cameras. The plurality of cameras can include a main camera and at least one secondary camera, and the secondary camera can include at least one of the following: a wide-angle camera, an ultra-wide-angle camera, and a front camera. The embodiment of the present application is described taking the case where the display method is applied to a terminal as an example. As shown in fig. 1, the display method may include:
and step 101, displaying the images shot by the plurality of cameras in a plurality of display areas of the screen respectively.
In the embodiment of the present application, the plurality of display areas refers to at least two display areas, and the plurality of cameras refers to at least two cameras. When a user wants to shoot with the terminal, the user can perform a setting input on the shooting control displayed on the screen, so that after receiving the setting input on the shooting control, the terminal responds to the setting input and enters a shooting mode, where the shooting mode includes a video recording mode, a photographing mode, and the like. The terminal then respectively displays the images shot by the plurality of cameras in the plurality of display areas of its screen. The setting input on the shooting control can include a click, long-press, slide, hover gesture, or voice input on the shooting control.
One display area is used for displaying the image shot by one camera. The image shot by a camera can be the complete image within the camera's field of view, or a partial image of that complete image. In an optional implementation, when the image displayed in a display area is a partial image of the complete image, the terminal may receive a fifth input in the display area and, in response to the fifth input, control the display area to display content at a different position in the complete image. By way of example, the fifth input may be a slide input. Based on the sliding direction and sliding length of the fifth input, the terminal may control the complete image to move by the sliding length within the display area, so that after the movement ends, the display area displays the content of the complete image that falls within it.
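As an aid to understanding, the following is a minimal sketch of the panning logic above, assuming the display area is a rectangular pixel window into the complete image; the function and its parameter names are hypothetical and not part of the original disclosure.

```python
# Hypothetical sketch: shift the display window across the complete image by
# the slide length, clamping so the window never leaves the image. Whether the
# window moves with or against the slide direction is an assumption here.

def pan_display_window(win_x, win_y, win_w, win_h, full_w, full_h, dx, dy):
    """Return the window's new top-left corner after a slide of (dx, dy) pixels."""
    new_x = min(max(win_x + dx, 0), full_w - win_w)
    new_y = min(max(win_y + dy, 0), full_h - win_h)
    return new_x, new_y

# Usage: a 400x300 window over a 1920x1080 image, slid 50 px to the right.
print(pan_display_window(100, 100, 400, 300, 1920, 1080, 50, 0))  # (150, 100)
```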
Optionally, the plurality of display areas may at least partially overlap, or the plurality of display areas may be arranged in an array. For the case where the plurality of display areas at least partially overlap, as a first example, the display area corresponding to the main camera of the terminal may be the entire display area of the display screen, and the display areas corresponding to the other cameras may be overlaid on the display area corresponding to the main camera.
As a second example, suppose the terminal includes a rear main camera and a rear wide-angle camera, and the images shot by both cameras include a butterfly. As shown in fig. 2, the entire display area 201 of the terminal's screen is the display area corresponding to the rear main camera, and a butterfly is displayed in the entire display area 201. The partial display area 202 of the screen is the display area corresponding to the rear wide-angle camera, and a butterfly is displayed in the partial display area 202.
As a third example, as shown in fig. 3, the terminal includes three display areas for displaying images shot by the cameras, namely the entire display area 301 of the display screen, a first partial display area 302, and a second partial display area 303. Assume the terminal includes a front main camera, a front wide-angle camera, and a rear main camera, that the images shot by the front main camera and the front wide-angle camera each include a butterfly, and that the image shot by the rear main camera includes two children. The display area corresponding to the rear main camera is the entire display area 301, in which the two children are displayed. The display area corresponding to the front main camera is the first partial display area 302, and the display area corresponding to the front wide-angle camera is the second partial display area 303; the first partial display area 302 and the second partial display area 303 each display a butterfly.
For the case where the plurality of display areas are arranged in an array, assume as an example that the plurality of cameras include a rear main camera, a rear wide-angle camera, and a front camera. In this case, as shown in fig. 4, a first display area 401 corresponding to the rear wide-angle camera and a second display area 402 corresponding to the front camera may be arranged side by side, and a third display area 403 corresponding to the rear main camera may be arranged perpendicular to the first and second display areas. Assume the image shot by the front camera is a face image, and the images shot by the rear main camera and the rear wide-angle camera each include a butterfly. The second display area 402 of the terminal then displays the face image, and the first display area 401 and the third display area 403 each display a butterfly.
Step 102, receiving a first input of a target object included in a target image of a plurality of images.
In the embodiment of the present application, if a user wants to select a target object in a target image shot by a target camera for framing, the user can enable the target object tracking function of the terminal. The user can perform a first input on the target object of the target image in the target display area corresponding to the target camera, so that the terminal receives the first input on the target object of the target image and then, in response to the first input, tracks and displays the target object. Optionally, the first input on the target object included in the target image may include: a click, long-press, slide, hover gesture, or voice input on the target object.
And 103, responding to the first input, and performing object recognition on the image shot by the target camera, wherein the target camera is the camera for shooting the target image.
In the embodiment of the application, after receiving a first input of a target object for a target image, a terminal can perform object recognition on each frame of image shot by a target camera in response to the first input, and determine whether the image shot by the target camera includes the target object.
Optionally, when performing object recognition on the image shot by the target camera, a Local Feature Analysis (LFA) method or a neural network method may be used. For example, when performing object recognition on the image shot by the target camera with a neural network method, the terminal may input the image shot by the target camera into an object recognition model to obtain an image recognition result, where the image recognition result includes each object contained in the image shot by the target camera. The terminal then checks whether the target object is among these objects. If the target object is among them, the terminal determines that the image shot by the target camera includes the target object; if not, the terminal determines that the image shot by the target camera is recognized as not including the target object.
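The per-frame check described above might look like the following minimal sketch, assuming a generic detection model that returns a list of (label, bounding box) pairs per frame; the detector and its output format are assumptions, not part of the original disclosure.

```python
# Hypothetical sketch: run a detector on one frame and report whether the
# user's target object is among the recognized objects.

def frame_contains_target(detect, frame, target_label):
    """Return True if the target object appears in this frame's detections."""
    detections = detect(frame)  # e.g. [("butterfly", (x, y, w, h)), ...]
    return any(label == target_label for label, _box in detections)
```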
And 104, displaying the target object in a target display area corresponding to the target camera under the condition that the image shot by the target camera is recognized to include the target object.
In the embodiment of the present application, when it is recognized that the image shot by the target camera does not include the target object, this indicates that the scene the user wants to select, that is, the target object, is not within the framing range of the target camera, and the terminal directly displays the image shot by the target camera in the target display area. The image here refers to the complete image within the field of view shot by the target camera.
The terminal may stop performing object recognition on the image captured by the target camera at a set timing, and directly display the image captured by the target camera in the target display area to end the tracking display of the target object.
Optionally, the set timing may include: the terminal recognizing that the image shot by the target camera does not include the target object, or the terminal receiving a second input on the target object displayed in the target display area. The second input on the target object displayed in the target display area may include: a click, long-press, slide, hover gesture, or voice input on the target object.
For example, if the user wants to stop framing the target object, the user may terminate the target object tracking function of the terminal. The user may click on the target object displayed in the target display area. After receiving the click input in the target display area, the terminal responds to the click input, stops performing object recognition on the image shot by the target camera, and directly displays the image shot by the target camera in the target display area.
In the embodiment of the application, in the case that the plurality of display areas of the screen respectively display the images shot by the plurality of cameras, by receiving a first input of a target object included in a target image in the plurality of images, object recognition can be performed on the image shot by the target camera shooting the target image in response to the first input, so that the target object is displayed in the target display area corresponding to the target camera in the case that the image shot by the target camera includes the target object. According to the technical scheme, after the user selects the target object needing to be framed through the first input, the terminal can display the target object in the target display area under the condition that the image shot by the target camera comprises the target object, and the target object is tracked and displayed by the target camera. Compared with the prior art, the problem that the view object required by the user cannot be displayed because the view object is not located in the fixed area where the terminal can display the shot image is avoided, the display effect of the terminal on the shot image in the shooting process is improved, and the user experience of simultaneously using a plurality of cameras for shooting is improved.
Please refer to fig. 5, which shows a flowchart of another display method provided in an embodiment of the present application. The display method may be applied to an electronic device. Optionally, the electronic device may be a terminal. The terminal can be a mobile phone, a computer, a wearable device, or the like that has a screen and a plurality of cameras. The plurality of cameras can include a main camera and at least one secondary camera, and the secondary camera can include at least one of the following: a wide-angle camera, an ultra-wide-angle camera, and a front camera. The embodiment of the present application is described taking the case where the display method is applied to a terminal as an example. As shown in fig. 5, the display method may include:
and step 501, displaying the images shot by the plurality of cameras in a plurality of display areas of the screen respectively.
The explanation and implementation of step 501 may refer to the explanation and implementation of step 101, which is not described in detail in this embodiment of the present application.
Step 502, receiving a first input of a target object comprised by a target image of a plurality of images.
The explanation and implementation of step 502 may refer to the explanation and implementation of step 102, which is not described in detail in this embodiment of the present application.
And step 503, responding to the first input, and performing object recognition on the image shot by the target camera, wherein the target camera is the camera for shooting the target image.
The explanation and implementation of step 503 may refer to the explanation and implementation of step 103, which is not described in detail in this embodiment of the present application.
And step 504, under the condition that the image shot by the target camera comprises the target object, cutting the image of the target camera to obtain a display image, wherein the display image comprises the target object.
In the embodiment of the present application, when it is recognized that the image shot by the target camera includes the target object, this indicates that the scene the user wants to select, that is, the target object, is within the framing range of the target camera, and the terminal crops the image of the target camera to obtain a display image including the target object. In this way, since the image of the target camera is cropped while the target object is retained, the size of the image displayed in the target display area corresponding to the target camera is reduced. Therefore, without changing the size of the target display area, the definition of the display image is improved, which improves the user's shooting experience.
Optionally, there may be various implementations by which the terminal crops the image of the target camera to obtain the display image; the embodiment of the present application describes the following three examples.
In a first optional implementation manner, the process of the terminal cutting the image of the target camera to obtain the display image may include: the terminal cuts the image shot by the target camera along the boundary line of the circumscribed rectangle of the target object to obtain a preliminary display image, and then adjusts the size of the preliminary display image to the size of the target display area to obtain the display image.
Optionally, the terminal may identify the position information of the boundary line of the circumscribed rectangle of the target object in the image shot by the target camera, and cut the image shot by the target camera along the circumscribed rectangle of the target object according to that position information to obtain the preliminary display image. The terminal may then obtain the size information of the target display area, which may include the target length and target width of the target display area, and, taking the center point of the preliminary display image as a fixed point, zoom the preliminary display image until its size matches the size of the target display area, obtaining the display image.
For example, the circumscribed rectangle of the target object may be the minimum circumscribed rectangle of the target object. When the terminal performs object recognition on the image of the target camera, the vertex coordinates of two opposite corners of the minimum circumscribed rectangle of each object can be obtained. The terminal calculates the position information of the boundary line of the minimum circumscribed rectangle from the vertex coordinates of two opposite corners of the minimum circumscribed rectangle of the target object, cuts the image shot by the target camera along the circumscribed rectangle of the target object according to the position information to obtain a preliminary display image, and adjusts the size of the preliminary display image to the size of the target display area to obtain the display image.
For example, with continued reference to fig. 2, the target camera is the rear wide-angle camera, and the target object is the butterfly shot by the rear wide-angle camera. The circumscribed rectangle of the target object is rectangle A. The terminal cuts the image shot by the target camera along the boundary line of the circumscribed rectangle A of the target object to obtain a preliminary display image, and adjusts the size of the preliminary display image to the size of the target display area to obtain the display image.
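As a concrete aid, here is a minimal sketch of this first implementation, assuming OpenCV-style image arrays and an (x, y, w, h) rectangle; the function and parameter names are hypothetical.

```python
# Hypothetical sketch: cut the frame along the target object's circumscribed
# rectangle to get a preliminary display image, then scale that image to the
# size of the target display area. The rectangle format is an assumption.
import cv2

def crop_and_fit(frame, rect, area_w, area_h):
    """Crop along the bounding rectangle, then resize to the display area."""
    x, y, w, h = rect
    preliminary = frame[y:y + h, x:x + w]          # preliminary display image
    return cv2.resize(preliminary, (area_w, area_h))
```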
In a second optional implementation manner, the process of the terminal cutting the image of the target camera to obtain the display image may include: the terminal cuts the image shot by the target camera along the boundary line of the circumscribed rectangle of the target object to obtain the display image.
Optionally, the terminal may identify the position information of the boundary line of the circumscribed rectangle of the target object in the image shot by the target camera, and cut the image shot by the target camera along the circumscribed rectangle of the target object according to that position information to obtain the display image. For example, the circumscribed rectangle of the target object may be the minimum circumscribed rectangle of the target object.
For example, continuing to refer to fig. 4, the target camera is the rear wide-angle camera, and the target object is the butterfly shot by the rear wide-angle camera. The circumscribed rectangle of the target object is rectangle A. The terminal cuts the image shot by the target camera along the boundary line of the circumscribed rectangle A of the target object to obtain the display image.
In a third optional implementation manner, the process of the terminal cutting the image of the target camera to obtain the display image may include: the terminal cuts the image shot by the target camera along the edge contour of the target object to obtain the display image.
In the embodiment of the present application, the terminal can identify the position information of the edge contour line of the target object in the image shot by the target camera, and cut the image shot by the target camera along the edge contour of the target object according to the position information to obtain the display image. Illustratively, the terminal retains the partial image enclosed by the position information in the image shot by the target camera and cuts away the portion of the image outside that partial image. Optionally, when identifying the position information of the edge contour line of the target object, the terminal may use OpenCV, an open-source cross-platform computer vision and machine learning software library that can run on all major operating systems. For example, the algorithm adopted by the terminal for recognizing the position information of the edge contour line of the target object may include: the Canny edge algorithm, the Sobel edge algorithm, or the Laplacian edge algorithm. For example, referring to fig. 3, the target cameras are the front main camera and the front wide-angle camera, and the target object is the butterfly shot by the front main camera and the front wide-angle camera. The terminal cuts the images shot by the target cameras along the edge contour of the butterfly to obtain display images.
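The following is a minimal sketch of such contour-based cropping with the OpenCV primitives named above; the Canny thresholds and the assumption that the largest contour is the target object are illustrative choices, not part of the original disclosure.

```python
# Hypothetical sketch: find the target object's edge contour with Canny,
# keep only the pixels inside it (alpha = 0 elsewhere), and crop to its
# bounding box. Thresholds and the largest-contour heuristic are assumptions.
import cv2
import numpy as np

def crop_along_contour(frame_bgr):
    """Cut a BGR frame along the largest detected edge contour."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    target = max(contours, key=cv2.contourArea)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [target], -1, 255, thickness=cv2.FILLED)
    bgra = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
    bgra[mask == 0, 3] = 0                  # transparent outside the contour
    x, y, w, h = cv2.boundingRect(target)
    return bgra[y:y + h, x:x + w]
```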
And 505, displaying a display image in a target display area corresponding to the target camera.
In an optional implementation manner, for all three implementation manners in which the terminal crops the image of the target camera in step 504 to obtain the display image, the terminal may directly display the obtained display image in the target display area corresponding to the target camera.
For example, referring to fig. 2, the display image obtained from the image shot by the rear wide-angle camera, after being adjusted to the size of the target display area, is displayed in the corresponding display area 202.
In another optional implementation manner, for the second implementation manner in which the terminal crops the image of the target camera in step 504 to obtain the display image, the process of the terminal displaying the display image in the target display area corresponding to the target camera may include: the terminal adjusts the size of the target display area according to the size of the circumscribed rectangle, and displays the display image in the center of the adjusted target display area.
Optionally, according to the size of the circumscribed rectangle, the terminal may zoom the target display area with the center point of the target display area as a fixed point until the size of the target display area matches the size of the circumscribed rectangle, and then display the display image in the center of the adjusted target display area. Alternatively, the terminal may zoom the target display area with the upper-left corner of the target display area as a fixed point until its size matches the size of the circumscribed rectangle, and then display the display image in the center of the adjusted target display area.
For example, referring to fig. 4, the terminal adjusts the size of the first display area 401 corresponding to the rear wide-angle camera according to the size of the circumscribed rectangle A of the target object, and displays the display image in the center of the adjusted target display area. It should be noted that the rectangle A indicated by the dashed frame is not displayed in actual applications; the dashed frames in fig. 2 and fig. 4 are only used to schematically illustrate the circumscribed rectangle of the target object.
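For illustration, here is a minimal sketch of adjusting the target display area to the circumscribed rectangle's size about a fixed point (its center or its upper-left corner); the (x, y, w, h) representation and function name are hypothetical.

```python
# Hypothetical sketch: resize the display area to (rect_w, rect_h), keeping
# either its center point or its upper-left corner fixed, as described above.

def fit_area_to_rect(x, y, w, h, rect_w, rect_h, anchor="center"):
    """Return the adjusted display area as (x, y, w, h)."""
    if anchor == "center":
        cx, cy = x + w / 2, y + h / 2            # center point stays fixed
        return cx - rect_w / 2, cy - rect_h / 2, rect_w, rect_h
    return x, y, rect_w, rect_h                  # upper-left corner stays fixed
```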
In the embodiment of the present application, each display area of the terminal can display a boundary line, so that the user can more clearly distinguish the display areas corresponding to different cameras. Optionally, for the third implementation manner in which the terminal crops the image of the target camera in step 504 to obtain the display image, the process of the terminal displaying the display image in the target display area may include: the terminal adjusts the size of the target display area to the size of the display image, displays the display image in the adjusted target display area, and hides the boundary line of the target display area.
Optionally, the process of the terminal adjusting the size of the target display area to the size of the display image may include: the terminal obtains the position information of the edge contour of the display image and, according to that position information, adjusts the edge contour of the target display area to coincide with the edge contour of the display image, obtaining the adjusted target display area. For how the terminal identifies the position information of an edge contour, refer to the way the terminal identifies the position information of the edge contour line of the target object in the third cropping implementation; this is not described again in the embodiment of the present application. It should be noted that, since the target object may be moving, the position of the adjusted target display area may follow the target object from image to image; the position of the target display area is not fixed.
Optionally, the process of the terminal hiding the boundary line of the target display area may include: the terminal adjusts the alpha value of each pixel on the boundary line of the target display area to 0. For example, the terminal may detect the edge contour of the target display area, that is, the position information of the boundary line, and set the alpha value of each pixel at those positions to 0.
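As a small illustration, the alpha trick above might look like the following, assuming the boundary line is a one-pixel frame rendered in a BGRA overlay; the overlay layout is an assumption.

```python
# Hypothetical sketch: hide the boundary line by setting the alpha value of
# every pixel along the display area's border to 0 in a BGRA overlay array.
import numpy as np

def hide_boundary(overlay_bgra: np.ndarray) -> np.ndarray:
    """Make the display area's one-pixel boundary fully transparent."""
    overlay_bgra[0, :, 3] = 0    # top edge
    overlay_bgra[-1, :, 3] = 0   # bottom edge
    overlay_bgra[:, 0, 3] = 0    # left edge
    overlay_bgra[:, -1, 3] = 0   # right edge
    return overlay_bgra
```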
In this way, the display image obtained by cutting the image shot by the target camera along the edge contour of the target object contains only the target object. Hiding the boundary line of the target display area therefore better shows the position of the selected target object within the final captured picture, achieving the effect of fusing the target object into that picture, which improves the display effect of the images the terminal shoots and enhances the user experience. Further, since the size of the target object is smaller than the size of the complete image, the display image is smaller while its definition is preserved. Thus, when the plurality of display areas at least partially overlap, the overlap between the display image and the other display areas is small, reducing the occlusion of the other display areas. This alleviates, to a certain extent, the problem of the display areas corresponding to the plurality of cameras blocking one another, improves the display effect of the captured images during shooting, and enhances the experience of users shooting with multiple cameras simultaneously.
Illustratively, the user performs a first input on the butterflies displayed in the first partial display area 302 and the second partial display area 303 of the terminal shown in fig. 3. After receiving the first input, the terminal responds to it and performs object recognition on the images shot by the front main camera and the front wide-angle camera. When the images shot by the front main camera and the front wide-angle camera are recognized as including a butterfly, the terminal cuts the images shot by the front main camera and the front wide-angle camera along the edge contour of the butterfly to obtain display images. For the image shot by the front main camera, the terminal adjusts the size of the first partial display area 302 to the size of the display image cut along the edge contour, displays that display image in the adjusted first partial display area 302, and hides the boundary line of the first partial display area 302. For the image shot by the front wide-angle camera, the terminal adjusts the size of the second partial display area 303 to the size of the display image cut along the edge contour, displays that display image in the adjusted second partial display area 303, and hides the boundary line of the second partial display area 303. In this case, the display interface of the terminal is as shown in fig. 6: the terminal displays the cropped butterflies in the first partial display area 302 and the second partial display area 303, and displays the two children in the entire display area 301. The dashed lines in fig. 6 indicate the positions of the boundary lines and are not shown in an actual implementation.
Step 506, receiving a second input of the target object displayed in the target display area.
In the embodiment of the present application, if the user wants to stop framing the target object, the user can terminate the target object tracking function of the terminal. The user may perform a second input on the target image in the target display area, so that the terminal receives the second input on the target object in the target display area and subsequently, in response to the second input, stops the tracking display of the target object. Optionally, the second input on the target object displayed in the target display area may include: a click, long-press, slide, hover gesture, or voice input on the target object.
And 507, responding to the second input, stopping object recognition on the image shot by the target camera, and displaying the image shot by the target camera in the target display area.
In the embodiment of the present application, after receiving the second input on the target object displayed in the target display area, the terminal responds to the second input, stops performing object recognition on the image shot by the target camera, and directly displays the image shot by the target camera in the target display area. For example, if the user wants to stop framing the target object, the user may terminate the target object tracking function of the terminal by clicking on the target object displayed in the target display area. After receiving the click input in the target display area, the terminal responds to the click input, stops performing object recognition on the image shot by the target camera, and directly displays the image shot by the target camera in the target display area.
Optionally, the size and/or position of each display area of the terminal in the embodiment of the present application can be changed. This allows the user to conveniently adjust the display areas according to his or her own needs, so that the images shot by the cameras can be displayed flexibly, improving the user's shooting experience.
In an optional implementation, the method further includes: the terminal receives a third input to the boundary line of the display area. In response to a third input, controlling altering a size of the display area based on the third input.
In this embodiment, if the user wants to adjust the size of a certain display area, a third input may be performed on the boundary line of the display area, so that the terminal, after receiving the third input, controls to change the size of the display area based on the third input in response to the third input. Alternatively, the third input for the boundary line may be a click, long press, swipe, hover gesture, or voice input for the boundary line.
For example, when the third input is a click input, the third input may include a single-click input or a double-click input. The process of the terminal controlling the change of the size of the display area based on the third input may include: when the third input is a single-click input, the terminal controls both the length and the width of the display area to increase by a set length; when the third input is a double-click input, the terminal controls both the length and the width of the display area to decrease by the set length.
As another example, the third input is a slide input. The process of the terminal controlling the change of the size of the display area based on the third input may include: the terminal controls the change of the size of the display area according to the sliding direction and sliding distance of the slide input. When the slide moves away from the center point of the display area, the terminal controls the length and width of the display area to increase by the sliding distance; when the slide moves toward the center point of the display area, the terminal controls the length and width of the display area to decrease by the sliding distance. A sketch of this rule follows.
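A minimal sketch of the slide-based resizing rule, assuming 2D pixel coordinates; the function and its names are hypothetical.

```python
# Hypothetical sketch: grow or shrink the display area by the slide distance,
# depending on whether the slide ends farther from or closer to the area's
# center point than it started.
import math

def resize_by_slide(w, h, center, start, end):
    """Return the display area's new (width, height) after a slide input."""
    distance = math.dist(start, end)
    moving_away = math.dist(end, center) > math.dist(start, center)
    delta = distance if moving_away else -distance
    return max(w + delta, 0.0), max(h + delta, 0.0)

# Usage: slide 100 px away from the center of a 400x300 area centered at (200, 150).
print(resize_by_slide(400, 300, (200, 150), (250, 150), (350, 150)))  # (500.0, 400.0)
```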
In an optional implementation, the method further includes: a fourth input to the display area is received. In response to a fourth input, the display area is controlled to move based on the fourth input.
In this embodiment, if the user wants to move the position of a certain display area, the user may perform a fourth input on the display area, so that after receiving the fourth input, the terminal responds to it by controlling the display area to move based on the fourth input. Optionally, the fourth input on the display area may be a click, long-press, slide, hover gesture, or voice input on the display area. Illustratively, the fourth input is different from both the first input and the second input.
For example, when the fourth input is a click input, the process of the terminal controlling the movement of the display area based on the fourth input may include: the terminal controls the display area to move a set distance in a set direction.
As another example, when the fourth input is a slide input, the process of the terminal controlling the movement of the display area based on the fourth input may include: the terminal controls the display area to move by the sliding distance in a direction parallel to the sliding direction of the slide input.
In the embodiment of the application, in the case that the plurality of display areas of the screen respectively display the images shot by the plurality of cameras, by receiving a first input of a target object included in a target image in the plurality of images, object recognition can be performed on the image shot by the target camera shooting the target image in response to the first input, so that in the case that the image shot by the target camera includes the target object, the target object is displayed in the target display area corresponding to the target camera. According to the technical scheme, after the user selects the target object needing to be framed through the first input, the terminal can display the target object in the target display area under the condition that the image shot by the target camera comprises the target object, and the target object is tracked and displayed by the target camera. Compared with the prior art, the problem that the view object required by the user cannot be displayed because the view object is not located in the fixed area where the terminal can display the shot image is avoided, the display effect of the terminal on the shot image in the shooting process is improved, and the user experience of simultaneously using a plurality of cameras for shooting is improved.
In the display method provided in the embodiment of the present application, the execution body may be a display device, or a control module in the display device for executing the display method. In the embodiment of the present application, a display device executing the display method is taken as an example to describe the display device provided in the embodiment of the present application.
Referring to fig. 7, a block diagram of a display device according to an embodiment of the present application is shown. As shown in fig. 7, the display device 700 includes: a display module 701, a receiving module 702 and an identification module 703.
A display module 701, configured to display images captured by multiple cameras in multiple display areas of a screen respectively;
a receiving module 702, configured to receive a first input of a target object included in a target image in a plurality of images;
the recognition module 703 is configured to perform object recognition on an image captured by a target camera in response to a first input, where the target camera is a camera capturing a target image;
the display module 701 is further configured to display the target object in a target display area corresponding to the target camera if it is recognized that the image captured by the target camera includes the target object.
Optionally, the display device 700 further includes: and (5) a cutting module.
And the cutting module is used for cutting the image of the target camera to obtain a display image, and the display image comprises a target object.
The display module 701 is further configured to display a display image in the target display area.
Optionally, the clipping module is further configured to:
cutting an image shot by a target camera along a boundary line of a circumscribed rectangle of a target object to obtain a primary display image;
and adjusting the size of the preliminary display image to the size of the target display area to obtain a display image.
Optionally, the clipping module is further configured to: and cutting the image shot by the target camera along the boundary line of the circumscribed rectangle of the target object to obtain a display image.
The display module 701 is further configured to adjust the size of the target display area according to the size of the circumscribed rectangle, and display a display image in the center of the adjusted target display area.
Optionally, the display area has a boundary line, and the cropping module is further configured to: and cutting the image shot by the target camera along the edge contour of the target object to obtain a display image.
Optionally, the display area has a boundary line, and the display module 701 is further configured to: the size of the target display area is adjusted to the size of the display image, the display image is displayed in the adjusted target display area, and the boundary line of the target display area is hidden.
Optionally, the receiving module 702 is further configured to receive a second input of the target object displayed in the target display area.
The device further includes: a stop processing module, configured to, in response to the second input, stop performing object recognition on the image shot by the target camera and display the image shot by the target camera in the target display area.
Optionally, the plurality of display regions are at least partially overlapped, or the plurality of display regions are arranged in an array.
In summary, the display device provided in the embodiment of the present application, in a case where images captured by a plurality of cameras are respectively displayed in a plurality of display areas of a screen, by receiving a first input to a target object included in a target image of the plurality of images, makes it possible to perform object recognition on an image captured by the target camera capturing the target image in response to the first input, and thus, in a case where the image captured by the target camera includes the target object, the target object is displayed in a target display area corresponding to the target camera. According to the technical scheme, after the user selects the target object needing to be framed through the first input, the terminal can display the target object in the target display area under the condition that the image shot by the target camera comprises the target object, and the target object is tracked and displayed by the target camera. Compared with the prior art, the problem that the view object required by the user cannot be displayed because the view object is not located in the fixed area where the terminal can display the shot image is avoided, the display effect of the terminal on the shot image in the shooting process is improved, and the user experience of simultaneously using a plurality of cameras for shooting is improved.
The display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited.
The display device in the embodiment of the present application may be a device having an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The display device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1 or fig. 5, and is not described here again to avoid repetition.
Optionally, as shown in fig. 8, an embodiment of the present application further provides an electronic device 800, including a processor 801, a memory 802, and a program or instructions stored in the memory 802 and executable on the processor 801. When executed by the processor 801, the program or instructions implement each process of the display method embodiments above and can achieve the same technical effects; to avoid repetition, details are not described here again.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application. The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not repeated here.
The display unit 906 is configured to display images captured by the plurality of cameras in a plurality of display areas of the screen, respectively.
The processor 910 is configured to receive a first input on a target object included in a target image among the plurality of images, and is further configured to, in response to the first input, perform object recognition on the image shot by a target camera, where the target camera is the camera that shot the target image.
A display unit 906, configured to display the target object in a target display area corresponding to the target camera if it is recognized that the image captured by the target camera includes the target object.
In the embodiment of the application, in the case that the plurality of display areas of the screen respectively display the images shot by the plurality of cameras, by receiving a first input of a target object included in a target image in the plurality of images, object recognition can be performed on the image shot by the target camera shooting the target image in response to the first input, so that the target object is displayed in the target display area corresponding to the target camera in the case that the image shot by the target camera includes the target object. According to the technical scheme, after the user selects the target object needing to be framed through the first input, the terminal can display the target object in the target display area under the condition that the image shot by the target camera comprises the target object, and the target object is tracked and displayed by the target camera. Compared with the prior art, the problem that the view object required by the user cannot be displayed because the view object is not located in the fixed area where the terminal can display the shot image is avoided, the display effect of the terminal on the shot image in the shooting process is improved, and the user experience of simultaneously using a plurality of cameras for shooting is improved.
Optionally, the processor 910 is further configured to crop an image of the target camera to obtain a display image, where the display image includes the target object.
A display unit 906 further configured to display the display image in the target display area.
Optionally, the processor 910 is further configured to cut the image captured by the target camera along a boundary line of a circumscribed rectangle of the target object to obtain a preliminary display image; and adjusting the size of the preliminary display image to the size of the target display area to obtain the display image.
Optionally, the processor 910 is further configured to crop the image captured by the target camera along the boundary line of the circumscribed rectangle of the target object to obtain the display image.
The display unit 906 is further configured to adjust the size of the target display area according to the size of the circumscribed rectangle and display the display image in the center of the adjusted target display area.
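A sketch of this alternative, again with the hypothetical types from above: the crop keeps its natural size and the target display area is resized to match the circumscribed rectangle, so no scaling is needed. DisplayArea is an assumed stand-in for whatever view object the UI layer provides.

```kotlin
// Assumed stand-in for a resizable view in the UI layer.
class DisplayArea(var width: Int, var height: Int) {
    fun show(frame: Frame) { /* hand the frame to the UI for drawing */ }
}

// Crop `frame` to `box` without scaling.
fun cropToRect(frame: Frame, box: Rect): Frame {
    val w = box.right - box.left
    val h = box.bottom - box.top
    val out = IntArray(w * h)
    for (y in 0 until h) {
        for (x in 0 until w) {
            out[y * w + x] = frame.pixels[(box.top + y) * frame.width + (box.left + x)]
        }
    }
    return Frame(out, w, h)
}

// Resize the display area to the rectangle, then show the crop centered
// (the sizes match exactly, so centering is trivial here).
fun showResized(area: DisplayArea, frame: Frame, box: Rect) {
    area.width = box.right - box.left
    area.height = box.bottom - box.top
    area.show(cropToRect(frame, box))
}
```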
Optionally, the processor 910 is further configured to crop the image captured by the target camera along the edge contour of the target object to obtain the display image.
Optionally, the display area has a boundary line, and the display unit 906 is further configured to adjust the size of the target display area to the size of the display image, display the display image in the adjusted target display area, and hide the boundary line of the target display area.
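A sketch of the edge-contour variant: pixels outside the target's contour are made transparent so only the object itself remains visible, which pairs naturally with resizing the area and hiding its boundary line. The Boolean mask is assumed to come from a segmentation step that is not shown.

```kotlin
// Keep object pixels, make background pixels fully transparent (ARGB = 0).
// `mask[i]` is assumed true exactly for pixels inside the target's contour.
fun cutAlongContour(frame: Frame, mask: BooleanArray): Frame {
    require(mask.size == frame.pixels.size) { "mask must cover every pixel" }
    val out = IntArray(frame.pixels.size)
    for (i in frame.pixels.indices) {
        out[i] = if (mask[i]) frame.pixels[i] else 0x00000000
    }
    return Frame(out, frame.width, frame.height)
}
```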
Optionally, the processor 910 is further configured to receive a second input of the target object displayed in the target display area, and to stop, in response to the second input, object recognition of the image captured by the target camera.
The display unit 906 is further configured to display the image captured by the target camera in the target display area.
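In terms of the first sketch, handling the second input only requires clearing the tracked target, after which onFrame() falls back to rendering the full captured image; the names remain hypothetical.

```kotlin
// Second input on the tracked object: stop recognition and restore the
// full camera image in the target display area from the next frame on.
fun onSecondInput(display: TrackedDisplay) {
    display.trackedTargetId = null
}
```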
Optionally, the plurality of display areas are at least partially overlapped, or the plurality of display areas are arranged in an array.
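For the array arrangement, one simple scheme is to split the screen into a near-square grid with one cell per camera, as in the following sketch (Rect as above; the overlapping arrangement would instead assign each area its own, possibly intersecting, rectangle).

```kotlin
import kotlin.math.ceil
import kotlin.math.sqrt

// Divide the screen into a near-square grid of equal display areas,
// one per camera, returned in row-major order.
fun layoutAreas(screenW: Int, screenH: Int, cameraCount: Int): List<Rect> {
    require(cameraCount > 0) { "need at least one camera" }
    val cols = ceil(sqrt(cameraCount.toDouble())).toInt()
    val rows = ceil(cameraCount / cols.toDouble()).toInt()
    val cellW = screenW / cols
    val cellH = screenH / rows
    return (0 until cameraCount).map { i ->
        val r = i / cols
        val c = i % cols
        Rect(c * cellW, r * cellH, (c + 1) * cellW, (r + 1) * cellH)
    }
}
```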
It should be understood that, in this embodiment of the application, the input unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042; the graphics processing unit 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071, also referred to as a touch screen, and other input devices 9072. The touch panel 9071 may include two parts: a touch detection device and a touch controller. The other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 909 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 910 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communication. It should be understood that the modem processor may alternatively not be integrated into the processor 910.
An embodiment of the present application further provides a readable storage medium. The readable storage medium stores a program or an instruction, and when the program or instruction is executed by a processor, the processes of the above display method embodiment are implemented and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above display method embodiment and achieve the same technical effect. To avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for causing a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the embodiments of the present application have been described with reference to the accompanying drawings, the application is not limited to the specific embodiments described above, which are illustrative rather than restrictive. Those skilled in the art may make various changes to these embodiments without departing from the spirit and scope of the application as defined by the appended claims.

Claims (10)

1. A display method, the method comprising:
displaying, in a plurality of display areas of a screen, images shot by a plurality of cameras respectively;
receiving a first input of a target object included in a target image among the plurality of images;
performing, in response to the first input, object recognition on an image shot by a target camera, wherein the target camera is the camera that shot the target image; and
displaying the target object in a target display area corresponding to the target camera when it is recognized that the image shot by the target camera includes the target object.
2. The method of claim 1, further comprising:
cropping the image shot by the target camera to obtain a display image, wherein the display image includes the target object;
wherein the displaying the target object in a target display area corresponding to the target camera comprises: displaying the display image in the target display area.
3. The method of claim 2, wherein the cropping the image shot by the target camera to obtain a display image comprises:
cropping the image shot by the target camera along the boundary line of the circumscribed rectangle of the target object to obtain a preliminary display image; and
adjusting the size of the preliminary display image to the size of the target display area to obtain the display image.
4. The method of claim 2, wherein the cropping the image shot by the target camera to obtain a display image comprises:
cropping the image shot by the target camera along the boundary line of the circumscribed rectangle of the target object to obtain the display image;
wherein the displaying the display image in the target display area comprises: adjusting the size of the target display area according to the size of the circumscribed rectangle, and displaying the display image in the center of the adjusted target display area.
5. The method of claim 2, wherein the cropping the image shot by the target camera to obtain a display image comprises:
cropping the image shot by the target camera along the edge contour of the target object to obtain the display image.
6. The method of claim 5, wherein the display area has a boundary line, and the displaying the display image in the target display area comprises: adjusting the size of the target display area to the size of the display image, displaying the display image in the adjusted target display area, and hiding the boundary line of the target display area.
7. The method of claim 1, further comprising:
receiving a second input of the target object displayed in the target display area; and
in response to the second input, stopping object recognition of the image shot by the target camera, and displaying the image shot by the target camera in the target display area.
8. The method of claim 1, wherein the plurality of display areas at least partially overlap, or the plurality of display areas are arranged in an array.
9. A display device, characterized in that the device comprises:
a display module, configured to display, in a plurality of display areas of a screen, images shot by a plurality of cameras respectively;
a receiving module, configured to receive a first input of a target object included in a target image among the plurality of images; and
an identification module, configured to perform, in response to the first input, object recognition on an image shot by a target camera, wherein the target camera is the camera that shot the target image;
wherein the display module is further configured to display the target object in a target display area corresponding to the target camera when it is recognized that the image shot by the target camera includes the target object.
10. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the display method according to any one of claims 1 to 8.
CN202111674460.9A 2021-12-31 2021-12-31 Display method and device and electronic equipment Active CN114339050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111674460.9A CN114339050B (en) 2021-12-31 2021-12-31 Display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111674460.9A CN114339050B (en) 2021-12-31 2021-12-31 Display method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN114339050A (en) 2022-04-12
CN114339050B (en) 2023-10-31

Family

ID=81021485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111674460.9A Active CN114339050B (en) 2021-12-31 2021-12-31 Display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114339050B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107786812A (en) * 2017-10-31 2018-03-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
JP2020154055A (en) * 2019-03-19 2020-09-24 株式会社昭和テック Image capturing device
WO2021051995A1 (en) * 2019-09-17 2021-03-25 维沃移动通信有限公司 Photographing method and terminal
CN110913132A (en) * 2019-11-25 2020-03-24 维沃移动通信有限公司 Object tracking method and electronic equipment
CN111316637A (en) * 2019-12-19 2020-06-19 威创集团股份有限公司 Spliced wall image content identification windowing display method and related device
CN111177420A (en) * 2019-12-31 2020-05-19 维沃移动通信有限公司 Multimedia file display method, electronic equipment and medium
WO2021147482A1 (en) * 2020-01-23 2021-07-29 华为技术有限公司 Telephoto photographing method and electronic device
CN111770275A (en) * 2020-07-02 2020-10-13 维沃移动通信有限公司 Shooting method and device, electronic equipment and readable storage medium
CN113794833A (en) * 2021-08-16 2021-12-14 维沃移动通信(杭州)有限公司 Shooting method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Yulei; TONG Chuangming; JU Zhiqin: "Multi-region GRECO virtual screen algorithm for analyzing the RCS of electrically large targets", Telecommunication Engineering (电讯技术), no. 11 *

Also Published As

Publication number Publication date
CN114339050B (en) 2023-10-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant