CN115118879A - Image shooting and displaying method and apparatus, electronic device, and readable storage medium


Info

Publication number
CN115118879A
Authority
CN
China
Prior art keywords
image
face
electronic device
parameter
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210724390.1A
Other languages
Chinese (zh)
Inventor
刘旭东
陈春辉
刘国祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210724390.1A priority Critical patent/CN115118879A/en
Publication of CN115118879A publication Critical patent/CN115118879A/en
Pending legal-status Critical Current

Abstract

The application discloses an image shooting and displaying method and apparatus, an electronic device, and a readable storage medium, belonging to the field of communication technology. The display method comprises the following steps: receiving a first input on an identifier of a first image, and, in response to the first input, displaying the first image based on a first parameter and a second parameter; wherein the first parameter comprises relative position information of a first object and a first electronic device, and the second parameter comprises relative position information of a second object and a second electronic device.

Description

Image shooting and displaying method and apparatus, electronic device, and readable storage medium
Technical Field
The present application belongs to the field of communication technology, and in particular relates to an image shooting method, an image display method, an image shooting apparatus, an image display apparatus, an electronic device, and a readable storage medium.
Background
In recent years, photographing has become one of the main functions of electronic devices, and users often use it to record daily life.
In the related art, when taking an image, a user usually adjusts the shooting angle to compose the picture and then shoots. However, when the user later views the image on a display screen, the image may appear tilted or distorted, so the visual effect when viewing the image is poor.
Disclosure of Invention
Embodiments of the present application aim to provide an image shooting method, an image display method, an image shooting apparatus, an image display apparatus, an electronic device, and a readable storage medium, so as to solve the problem of a poor visual effect when an image is viewed.
In a first aspect, an embodiment of the present application provides an image display method, including: receiving a first input on an identifier of a first image, and in response to the first input, displaying the first image based on a first parameter and a second parameter; wherein the first parameter includes relative position information of a first object and a first electronic device, and the second parameter includes relative position information of a second object and a second electronic device.
In a second aspect, an embodiment of the present application provides an image display apparatus, including a first receiving module and a display module, wherein: the first receiving module is configured to receive a first input on an identifier of a first image; the display module is configured to display, in response to the first input received by the first receiving module, the first image based on a first parameter and a second parameter; wherein the first parameter includes relative position information of a first object and a first electronic device, and the second parameter includes relative position information of a second object and a second electronic device.
In a third aspect, an embodiment of the present application provides an image capturing method, including: receiving a second input of a user; in response to the second input, acquiring a first image through a first electronic device and obtaining a first parameter, wherein the first parameter includes relative position information of a first object and the first electronic device; and storing the first image and the first parameter in association.
In a fourth aspect, an embodiment of the present application provides an image capturing apparatus, including a second receiving module, a second obtaining module and a storage module, wherein: the second receiving module is configured to receive a second input of a user; the second obtaining module is configured to, in response to the second input received by the second receiving module, acquire a first image through a first electronic device and obtain a first parameter, where the first parameter includes relative position information of a first object and the first electronic device; and the storage module is configured to store, in association, the first image and the first parameter obtained by the second obtaining module.
In a fifth aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect or the third aspect.
In a sixth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect or according to the third aspect.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect or the third aspect.
In an eighth aspect, embodiments of the present application provide a computer program product, stored in a storage medium, for execution by at least one processor to implement the method according to the first aspect or the method according to the third aspect.
In the embodiments of the present application, the image display apparatus receives a first input of a user on an identifier of a first image, and displays the first image based on a first parameter and a second parameter, where the first parameter includes relative position information of the first object and the first electronic device, and the second parameter includes relative position information of the second object and the second electronic device. In this way, when the first image is viewed, the image display apparatus can display it according to the relative position information of the second object and the second electronic device together with the relative position information of the first object and the first electronic device. The first image can therefore be displayed flexibly based on the relative positions of the objects and the electronic devices, which improves the visual effect when the image is displayed.
Drawings
Fig. 1 is a first schematic flowchart of an image display method according to an embodiment of the present application;
fig. 2 is a second schematic flowchart of an image display method according to an embodiment of the present application;
fig. 3 is a third schematic flowchart of an image display method according to an embodiment of the present application;
fig. 4(a) is a first schematic interface diagram of an image display method according to an embodiment of the present application;
fig. 4(b) is a second schematic interface diagram of an image display method according to an embodiment of the present application;
fig. 5 is a first schematic diagram of a face direction angle of a face image according to an embodiment of the present application;
fig. 6 is a second schematic diagram of a face direction angle of a face image according to an embodiment of the present application;
fig. 7 is a first schematic flowchart of an image shooting method according to an embodiment of the present application;
fig. 8 is a second schematic flowchart of an image shooting method according to an embodiment of the present application;
fig. 9 is a third schematic flowchart of an image shooting method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an image display apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 13 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein fall within the scope of protection of the present application.
The terms "first" and "second" in the description and claims of the present application are used to distinguish between similar objects and are not used to describe a particular order or sequence. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The shooting method and the display method provided by the embodiment of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The image display method provided by the embodiments of the present application can be applied to a scene in which an electronic device displays a picture shot through a camera. The picture may have been shot by a camera of the electronic device itself, or by a camera of another electronic device.
To obtain a scene photo that meets the user's requirements, the user first adjusts the angle of the electronic device to obtain a better preview of the shot scene, and then triggers the electronic device to shoot. Suppose user A uses the rear camera of the electronic device to photograph scene A, and the viewing-angle effect of the preview picture is best when the face of user A directly faces the display screen of the electronic device (i.e., the line of sight is perpendicular to the plane of the display screen); at this moment, user A presses the shooting key to trigger the electronic device to shoot image 1 of scene A. When later viewing image 1, if the user's face is tilted 30° to the left, the image 1 the user actually sees appears tilted to some extent, so the user's visual effect is poor.
The embodiments of the present application apply to electronic devices with a single camera or multiple cameras. The cameras connected to the electronic device include built-in cameras integrated into the electronic device and external cameras connected while the electronic device is in use. It should be noted that camera types include, but are not limited to, color cameras, black-and-white cameras, infrared cameras, moonlight cameras, starlight cameras, and general-purpose cameras.
Exemplary built-in cameras integrated into electronic devices include the built-in camera of a notebook computer, the front camera of a mobile phone, the rear camera of a mobile phone, the camera of a face-recognition attendance machine, the infrared camera in a monitoring system, and the like.
An external camera connected while the electronic device is in use may use a wired or a wireless connection. A wired connection means the electronic device connects to the external camera through an interface such as a USB interface, a Type-C interface, a Lightning interface, or a serial port. A wireless connection means the electronic device establishes an information interaction channel with the external camera through Bluetooth, infrared, a wireless local area network, or the like. The embodiments of the present application do not limit the connection mode of the external camera; whichever mode is used, both the electronic device and the external camera include the corresponding hardware elements.
An embodiment of the present application provides an image display method, and fig. 1 shows a flowchart of the image display method provided in the embodiment of the present application. As shown in fig. 1, an image display method provided in an embodiment of the present application may include the following steps 201 and 202:
step 201: the image display apparatus receives a first input for an identification of a first image.
Optionally, in this embodiment of the application, the first image may be an image captured by a camera of the first electronic device, and may also be an image captured by a camera of the second electronic device. The image display device may be the first electronic device or the second electronic device.
Optionally, in this embodiment of the application, the identifier of the first image may be a thumbnail, an icon, a character, and the like, which is not limited in this embodiment of the application.
Optionally, in this embodiment of the application, the first input may be a touch input, a voice input, a gesture input, or another feasible input of the user on the identifier of the first image, which is not limited in this embodiment of the application. Illustratively, the first input may be the user's click input, slide input, press input, or the like. Further, the click operation may be a click operation of any number of times, and the slide operation may be a slide operation in any direction, such as upward, downward, leftward, or rightward, which is not limited in the embodiments of the present application.
Illustratively, thumbnails of a plurality of pictures are displayed in the gallery, and the user clicks on the thumbnail of the picture 21 to view the picture 21.
Step 202: the image display device displays a first image based on the first parameter and the second parameter in response to the first input.
Wherein the first parameter includes: relative position information of the first object and the first electronic device; the second parameter includes: relative position information of the second object and the second electronic device.
Optionally, in this embodiment of the application, the first object is a photographer of the first image.
Illustratively, the first object is the object that triggers the first electronic device to shoot the first image. For example, user A uses the rear camera of the electronic device to photograph scene A, and when the face of user A directly faces the display screen of the electronic device (i.e., the line of sight is perpendicular to the plane of the display screen), user A presses the shooting key to trigger the electronic device to shoot image 21 of scene A.
Optionally, in this embodiment of the present application, the second object is an object for viewing the first image. The first object and the second object may be the same object or may be different objects.
Illustratively, the second object is the object that triggers the second electronic device to display the first image. For example, user B clicks on the thumbnail of the above-mentioned image 21 to view the image 21.
Optionally, in this embodiment of the application, the first electronic device is an electronic device that acquires a first image.
Optionally, in this embodiment of the application, the second electronic device is an electronic device that displays the first image.
Optionally, in this embodiment of the application, the first electronic device and the second electronic device may be the same electronic device or different electronic devices; when they are different electronic devices, their screen aspect ratios are similar or the same.
For example, in a case where the first electronic device and the second electronic device are different electronic devices, the first electronic device transmits the first image to the second electronic device after shooting it, and the second electronic device can display the first image when the second object needs to view it through the second electronic device.
Optionally, in this embodiment of the application, the first parameter is relative position information of a photographer and the first electronic device when the first electronic device captures the first image. For example, the relative position information may be a gaze angle at which the first object gazes at the display screen of the first electronic device. For example, the relative position information may be an angle between the line of sight of the photographer and a plane on which the display screen of the first electronic device is located.
It should be noted that, when a user shoots a scene, in order to obtain a better visual effect, the user gazes at the preview interface with both eyes to find a suitable composition; that is, the user's face faces the display screen of the electronic device, the user's line of sight is approximately perpendicular to the display screen, and the included angle between the line of sight and the plane of the display screen can be taken as 90° by default.
Optionally, in this embodiment of the application, the second parameter is relative position information between an object viewing the first image and the second electronic device when the second electronic device displays the first image. For example, the relative position information may be a gaze angle at which the second object gazes at the display screen of the second electronic device. For example, the relative position information may be an angle between the line of sight of the viewer and a plane on which a display screen of the second electronic device is located.
Optionally, in a case where the first image is already displayed, the image display apparatus may adjust the display direction of the first image based on the first parameter and the second parameter; in a case where the first image is not yet displayed, the image display apparatus may determine the display direction of the first image based on the first parameter and the second parameter and then display it.
Illustratively, when user A views a landscape image shot through the rear camera: if user A's face directly faces the display screen of the electronic device, the angle between user A's line of sight and the plane of the display screen is approximately 90°; if user A's face is tilted 30° to the left (in the horizontal direction), that angle is approximately 120°; and if user A's face is tilted 30° to the right (in the horizontal direction), that angle is approximately 60°.
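Illustratively, the arithmetic behind these angles can be sketched as follows. This is a minimal sketch in which a frontal face maps to 90° and a leftward tilt is taken as positive; that sign convention matches the examples above but is an assumption of the sketch, not a limitation of the embodiments:

```python
def gaze_angle(face_tilt_deg: float) -> float:
    """Angle between the viewer's line of sight and the display-screen plane.

    Assumed convention: a face directly facing the screen gives 90 degrees,
    a leftward tilt is positive, a rightward tilt is negative.
    """
    return 90.0 + face_tilt_deg

print(gaze_angle(0.0))    # 90.0  (face directly facing the screen)
print(gaze_angle(30.0))   # 120.0 (face tilted 30 degrees to the left)
print(gaze_angle(-30.0))  # 60.0  (face tilted 30 degrees to the right)
```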
Under the condition that the first object and the second object are the same object and the first electronic device and the second electronic device are the same device, the visual effect that the same object views images on the same electronic device is improved. Under the condition that the first object and the second object are different objects and the first electronic device and the second electronic device are the same device, the visual effect that the images of the different objects are viewed on the same electronic device is improved. Under the condition that the first object and the second object are the same object and the first electronic device and the second electronic device are different devices, the visual effect that the same object views images on different electronic devices is improved.
In the image display method provided by the embodiments of the present application, the image display apparatus receives a first input of a user on an identifier of a first image, and displays the first image based on a first parameter and a second parameter, where the first parameter includes relative position information of the first object and the first electronic device, and the second parameter includes relative position information of the second object and the second electronic device. In this way, when the first image is viewed, the image display apparatus can display it according to the relative position information of the second object and the second electronic device together with the relative position information of the first object and the first electronic device. The first image can therefore be displayed flexibly based on the relative positions of the objects and the electronic devices, which improves the visual effect when the image is displayed.
Optionally, in this embodiment of the present application, in combination with fig. 1 above, as shown in fig. 2, the above step 202 may include the following steps 202a and 202b:
Step 202a: the image display apparatus, in response to the first input, determines a target display direction based on the first parameter and the second parameter.
Step 202b: the image display apparatus displays the first image in accordance with the target display direction.
Optionally, the target display direction represents the direction in which the first image is displayed on the display screen of the electronic device, with the display screen as the reference plane. For example, when the display screen is in portrait orientation, the image may be displayed from top to bottom, from bottom to top, from left to right, along a diagonal of the display screen, or in any other direction on the display screen.
Illustratively, the above-mentioned target display direction may be represented by any angle value within 0 ° to 360 °, such as 0 °, 15 °, 30 °, 90 °, 180 °, 270 °, and so on.
Optionally, the image display apparatus may determine a rotation angle of the first image based on the gaze angle at which the target object gazes at the display screen of the electronic device, and determine the target display direction according to the rotation angle.
For example, the image display apparatus may rotate the first image according to the rotation angle and display the rotated first image on a first interface. Illustratively, the first interface may include, but is not limited to, any one of the following: an application page of a first application, the desktop of the electronic device, the negative-one screen of the electronic device, the taskbar, and the like.
Illustratively, assuming the gaze angle at which the target object gazes at the display screen of the electronic device is 120°, the target object's face is tilted 30° to the left in the horizontal direction; to keep the image display direction consistent with the visual angle when viewing the image, the image display apparatus displays the first image after rotating it 30° clockwise.
For example, when user A uses the rear camera of the electronic device to shoot a landscape, it is assumed by default that user A's face directly faces the display screen (that is, at shooting time the angle between user A's line of sight and the plane of the display screen is 90°). If user A's face is tilted 30° to the left when the user clicks the thumbnail of the shot landscape image, the angle between user A's line of sight and the plane of the display screen is approximately 120°, and the first image can be displayed after being rotated 30° clockwise.
Optionally, when displaying the first image, the image display apparatus may also perform pixel compensation, intelligent filling, intelligent cropping and stereoscopic processing on the first image, so as to improve the user's visual effect.
In this way, when the first image is displayed, the visual effect the user obtains when viewing it is consistent with the visual effect obtained when shooting it, which improves the display effect of the first image.
In some possible implementations, the first object and the second object may be the same. For example, the first object is user A who triggers the shooting of image A, and the second object is the same user A who subsequently browses image A. Alternatively, the first object may be different from the second object. For example, the first object is user A who triggers the shooting of image A, and the second object is another user B who subsequently views image A.
Illustratively, suppose the first object and the second object are both user A. The image display apparatus may determine the display direction of image A based on the gaze angle of user A on the display screen when user A views image A with the electronic device and the gaze angle of user A on the display screen when user A shot image A with the electronic device.
For example, suppose user A obtained a better preview effect at a gaze angle of 120° to the display screen when shooting image A, and has a gaze angle of 150° when subsequently browsing it; the image display apparatus may then rotate image A clockwise by 30°. In this way, when the user browses image A later, the image display apparatus can adjust the display direction of image A based on the user's current gaze angle on the display screen, so that user A's view remains consistent between browsing image A and shooting it, and the user still obtains a good preview effect when browsing image A.
Further optionally, in an embodiment of the present application, the first parameter includes: a first face direction angle of the first object relative to the first electronic device; and the second parameter includes: a second face direction angle of the second object relative to the second electronic device.
optionally, in combination with fig. 2, as shown in fig. 3, the step 202a may include the following steps 202a 1:
step 202a 1: the image display apparatus determines a target display direction based on an angle difference between the first face direction angle and the second face direction angle in response to the first input.
Illustratively, the face direction angle is the included angle between the straight line along the vertical direction of the human face and the straight line along the horizontal direction.
For example, the first face direction angle may represent the angle by which the face of the first object is tilted to the left or right in the horizontal direction, and the second face direction angle may represent the angle by which the face of the second object is tilted to the left or right in the horizontal direction.
Illustratively, the first face direction angle represents the gaze angle of the first object on the display screen of the first electronic device, and the second face direction angle represents the gaze angle of the second object on the display screen of the second electronic device.
For example, take the first face direction angle as the face direction angle of user A and the second face direction angle as that of user B. As shown in fig. 4(a), suppose that when user A shoots the trees 2a, 2b and 2c in front using the rear camera of the electronic device, the face direction angle of user A acquired by the front camera is 105°. If user A sends the landscape image to user B, and the face direction angle of user B acquired by the front camera is 120° when user B browses the image, the image display apparatus rotates the landscape image 15° clockwise (i.e., 120° minus 105° is 15°) for display, so as to maintain the 105° visual-angle relationship between the line of sight and the image when viewing it. Fig. 4(a) is a schematic interface diagram of the landscape image rotated 15° clockwise, and the image 21 is the landscape image after that rotation.
Continuing the above example, if the face direction angle of user B acquired by the front camera is 60° when user B browses the landscape image, the image display apparatus rotates the landscape image 45° counterclockwise (i.e., 60° minus 105° is minus 45°) for display, so as to maintain the 105° visual-angle relationship between user B's line of sight and the image when viewing it.
Continuing the above example, if user B's face is tilted 15° to the left in the horizontal direction when browsing the landscape image, i.e., the face direction angle is 105°, then since the visual angle of the face relative to the electronic device is the same when browsing the landscape image as when it was shot, the image display apparatus does not rotate the landscape image. Fig. 4(b) is a schematic interface diagram in which the landscape image is displayed without rotation, and the image 22 is the landscape image without rotation.
In this way, the relationship between the shot scene and the user's visual angle is recorded when the picture is shot, so that a preferred viewing angle can be shared among users, and other users obtain a better visual effect when browsing the shot image.
It should be noted that, when the user's face is tilted 15° to the left in the horizontal direction, the gaze angle at which the user gazes at the display screen is 105° (i.e., 90° plus 15°); that is, the visual effect of viewing the image is good when the visual-angle relationship between the user and the display screen is 105°. Therefore, when the user subsequently views the image, the visual-angle relationship between the user and the image is adjusted to 105°, so that the user obtains a good visual effect.
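Illustratively, the rotation logic in the above examples can be sketched as follows, assuming the clockwise-positive convention used there (the clockwise rotation equals the viewing-time face direction angle minus the shooting-time face direction angle); Pillow is used here only as a stand-in for whatever image pipeline the device actually has:

```python
from PIL import Image

def display_rotation_cw(capture_angle_deg: float, view_angle_deg: float) -> float:
    # Clockwise rotation per the examples above:
    # capture 105, view 120 -> +15 (clockwise); capture 105, view 60 -> -45 (counterclockwise).
    return view_angle_deg - capture_angle_deg

def rotate_for_display(image: Image.Image,
                       capture_angle_deg: float,
                       view_angle_deg: float) -> Image.Image:
    # Pillow's rotate() takes counterclockwise-positive angles, so the clockwise
    # value is negated; expand=True enlarges the canvas so no corner is cut off.
    return image.rotate(-display_rotation_cw(capture_angle_deg, view_angle_deg),
                        expand=True)

landscape = Image.new("RGB", (640, 480))             # stand-in for the landscape image
shown = rotate_for_display(landscape, 105.0, 120.0)  # displayed rotated 15 degrees clockwise
```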
Further optionally, before the above step 202a1, the image display method provided in the embodiment of the present application may further include the following steps A1 and A2:
step A1: the image display device acquires a second image through a camera of the second electronic device.
Wherein, the second image comprises a second object.
Step A2: the image display device determines a face direction angle of the second object according to the second image.
Illustratively, the camera may be a front camera of the second electronic device.
For example, the image display apparatus may acquire the second image through a camera of the second electronic device upon receiving the first input of the user on the identifier of the first image.
Illustratively, take the first image as landscape image 1 and the second object as user A. Assuming user A clicks the thumbnail of landscape image 1, the image display apparatus captures an image containing user A's face through the front camera when user A's face is detected.
In one example, the image display apparatus may acquire, by a camera of the second electronic device, a second face direction angle of the second object with respect to the second electronic device when the second object views the first image using the second electronic device.
In another example, when the second object views the first image using the second electronic device, the image display apparatus may capture an image of the second object (that is, the second image) through a camera of the second electronic device, and then determine the second face direction angle of the second object relative to the second electronic device according to the face feature information of the second object in that image. For example, the face direction angle may be the angle between a first straight line along the vertical direction of the face in the image and a second straight line along the shortest connecting line between the two side edges of the image.
Exemplarily, fig. 5 is a schematic diagram of a face direction angle provided in an embodiment of the present application. The straight line A shown in fig. 5 is the first straight line along the vertical direction of the face in the image, and the straight line B is the second straight line along the shortest connecting line between the two side edges of the image; the included angle α between the first straight line and the second straight line is 120°, that is, the face direction angle is 120°.
Illustratively, the face feature information includes: the position and direction of the face features in the image. Illustratively, the above-mentioned facial features may be understood as facial feature points, or key points. Illustratively, the facial features (points) may include at least one of: left eye, right eye, nose, left corner of mouth, right corner of mouth, left ear, and right ear, and so forth. In the field of image processing, the face features are generally calculated as one point.
It should be noted that the facial features are fixed on the user's face image regardless of where that image lies, so the degree to which the user's face image is tilted on the screen can be characterized by the positions of the facial features on the screen.
Illustratively, some facial features are distributed along the horizontal direction of the face image and can therefore indicate its horizontal direction. For example, the tilt angle of the face in the horizontal direction (i.e., to the left or right) can be determined from the positions of the keypoints of the left eye, right eye, nose, left mouth corner and right mouth corner.
For example, the image display apparatus may determine a second face direction angle of the second object with respect to the second electronic device according to face feature information of the second object in the second image.
The process of determining the second face direction angle of the second object relative to the second electronic device according to the face feature information of the second object in the second image is described in detail below (the process may include the following Steps 1 to 5):
Step 1: detect the positions of the face feature points of the second object in the second image.
For example, the image display apparatus may input the second image into a deep learning network and preliminarily detect the positions of the face feature points in the second image through a deep learning algorithm.
Illustratively, the deep learning network is divided into three modules: feature extraction, feature fusion and task prediction. Feature extraction comprises five stages, C2, C3, C4, C5 and C6, where each stage includes pooling, convolution, batch normalization and activation operations and is mainly used to extract features from the image. Feature fusion comprises the stages P2, P3, P4 and P5 plus a context module; besides pooling, convolution, batch normalization and activation, it includes upsampling and the fusion of features from different stages (feature map addition and channel concatenation). The task prediction stage covers detection of the face box and detection of the face keypoints.
Further, C2/C3/C4/C5 are feature maps generated by the respective residual blocks in ResNet, and C6 is generated from C5 through a 3 × 3 convolutional layer.
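The text names the stages but not the exact layers; the following PyTorch sketch is therefore only one plausible reading (ResNet-50 backbone, illustrative channel widths, simplified heads, with the context module and channel concatenation omitted), not the claimed network itself:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class FaceDetectorSketch(nn.Module):
    """Illustrative reading of the described detector: ResNet stages give C2-C5,
    C6 comes from C5 through a 3x3 convolution, a top-down path fuses P2-P5 by
    upsampling plus feature-map addition, and shared heads predict the face box
    and five keypoints at each level."""
    def __init__(self, out_ch: int = 64):
        super().__init__()
        r = torchvision.models.resnet50(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.stages = nn.ModuleList([r.layer1, r.layer2, r.layer3, r.layer4])  # C2..C5
        self.c6 = nn.Conv2d(2048, out_ch, 3, stride=2, padding=1)              # C6 from C5
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1)
                                     for c in (256, 512, 1024, 2048))
        self.box_head = nn.Conv2d(out_ch, 4, 1)    # face box
        self.kpt_head = nn.Conv2d(out_ch, 10, 1)   # 5 keypoints x (x, y)

    def forward(self, x: torch.Tensor):
        x = self.stem(x)
        cs = []
        for stage in self.stages:
            x = stage(x)
            cs.append(x)
        ps = [lat(c) for lat, c in zip(self.lateral, cs)]   # P2..P5 before fusion
        for i in range(len(ps) - 2, -1, -1):                # top-down: upsample + add
            ps[i] = ps[i] + F.interpolate(ps[i + 1], size=ps[i].shape[-2:])
        ps.append(self.c6(cs[-1]))                          # coarsest level from C6
        return [(self.box_head(p), self.kpt_head(p)) for p in ps]
```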
Step 2: the reference face is subjected to correction processing based on the position of the face feature point in the second image.
Illustratively, when the second image is detected to contain multiple faces, the image display apparatus selects the face with the largest area as the target face for calculating the face direction.
Illustratively, the image display apparatus calculates the orientation of the face from the coordinates of the left and right eyes and the left and right mouth corners of the target face, and, according to that orientation, adjusts the target face to the position closest to an upright, forward face (i.e., such that the corrected face direction angle is close or equal to 0°) by rotating 90°, 180° or 270°, or flipping horizontally. Correcting the face direction in advance in this way improves the accuracy of the subsequent face keypoint detection.
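A minimal sketch of this coarse correction step follows, under the assumption taken from the step above that a corrected face direction angle of 0° denotes an upright, forward face (the horizontal-flip branch for mirrored faces is omitted):

```python
def coarse_upright_rotation(estimated_angle_deg: float) -> int:
    """Pick the 90-degree-multiple rotation that brings the detected face
    closest to upright (corrected angle near 0 degrees, per the step above)."""
    candidates = (0, 90, 180, 270)

    def residual(rot: int) -> float:
        # smallest absolute angular difference after rotating by `rot`
        return abs((estimated_angle_deg - rot + 180.0) % 360.0 - 180.0)

    return min(candidates, key=residual)

print(coarse_upright_rotation(100.0))  # 90  -> rotate 90 degrees to get near-upright
print(coarse_upright_rotation(-20.0))  # 0   -> already close to upright
```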
Step 3: and generating a target face image according to the face area in the target face after the face correction processing.
Step 4: and adjusting the image size of the target face image, and detecting the position of the key point of the target face image after the image size is adjusted.
Illustratively, the image display apparatus scales the corrected target face image, for example to 48 × 48, and inputs the scaled image into the face keypoint detection network to obtain the keypoint coordinates of the adjusted face.
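The text fixes only the 48 × 48 input size and the keypoint-coordinate output; one minimal PyTorch stand-in for such a second-stage keypoint network, with assumed layer sizes, might look like this:

```python
import torch
import torch.nn as nn

class KeypointNetSketch(nn.Module):
    """Illustrative second-stage network: takes the 48x48 corrected face crop and
    regresses five keypoints (left/right eye, nose, left/right mouth corner).
    Only the 48x48 input comes from the text; the layer sizes are assumptions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(128 * 6 * 6, 10)  # 5 keypoints x (x, y), in crop coordinates

    def forward(self, crop: torch.Tensor) -> torch.Tensor:  # crop: (N, 3, 48, 48)
        return self.head(self.features(crop).flatten(1)).view(-1, 5, 2)

kpts = KeypointNetSketch()(torch.randn(1, 3, 48, 48))  # -> shape (1, 5, 2)
```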
Illustratively, the image display apparatus inversely adjusts the keypoint coordinates based on the correction angle used during face correction and the coordinate relationship between the target face image and the second image, maps them back onto the face in the second image, and calculates the global keypoint coordinates of that face from the coordinate position of its face region.
It should be noted that the positions of the keypoints of the eyes, nose and mouth corners in the second image need to be detected, and obtaining these five keypoints directly from the full image gives low accuracy. The face and rough keypoint positions are therefore detected first; although the rough keypoint positions are not taken as the final result, they are sufficient to determine whether the face is vertical or horizontal. If the face is horizontal, it is adjusted to vertical, because keypoint detection is more accurate when performed uniformly on vertical faces; the direction-adjusted face image is then input into the second network to obtain keypoint positions of higher accuracy.
Step 5: calculate the face direction angle of the face in the second image according to the keypoint positions of the target face image.
For example, the image display apparatus may fit the coordinates of the midpoint of the two eyes (x1, y1), the midpoint of the two mouth corners (x2, y2) and the nose (x3, y3) by the least squares method, take the fitted straight line as the direction of the face, and take the angle between this straight line and the horizontal as the face direction angle. As shown in fig. 6, the midpoint of the eyes is the coordinate point a, the midpoint of the mouth corners is the coordinate point b, the nose is the coordinate point c, the fitted line is the straight line C, the horizontal line is the straight line D, the face direction angle is 120° and the face is tilted 30° to the left.
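A sketch of this Step 5 in Python follows; regressing x on y (rather than y on x) to fit the near-vertical face axis is a choice of this sketch that avoids the infinite slope of a perfectly upright face:

```python
import numpy as np

def face_direction_angle(eye_mid, mouth_mid, nose) -> float:
    """Least-squares line through the three keypoints; returns its angle to the
    horizontal in degrees, within [0, 180). Math-style coordinates (y up) are
    assumed; for image coordinates (y down), negate the y values first."""
    pts = np.array([eye_mid, mouth_mid, nose], dtype=float)
    a, _ = np.polyfit(pts[:, 1], pts[:, 0], 1)     # fit x = a*y + b
    return np.degrees(np.arctan2(1.0, a)) % 180.0  # angle of direction vector (a, 1)

# Synthetic collinear points on a line at 120 degrees to the horizontal,
# i.e. the fig. 6 case of a face tilted 30 degrees to the left:
print(round(face_direction_angle((0.0, 0.0), (-1.1547, 2.0), (-0.5774, 1.0))))  # 120
```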
In other possible embodiments, the above algorithm for calculating the face direction angle may be hardened into a chip and run in Always-on mode, i.e., a mode in which the chip keeps outputting image data at low resolution and low frame rate over a long period.
Illustratively, in combination with the above embodiments, when calculating the face direction angle for the second image, a larger image size (e.g., 640 × 480) may be input directly into the deep learning network for calculation, thereby improving the calculation accuracy.
Illustratively, the deep learning network can be designed as a larger network, thereby achieving higher accuracy.
In this way, the electronic device can accurately detect the keypoints of the user's face in a very short time while running in the background, without disturbing the user's other actions, and can record in real time the face direction angle at the instant the second image is captured, so that the display direction of the first image can be adjusted in real time.
It should be noted that, while the user browses a photo, the user's position and angle may change in real time, so the calculation of the user's face direction angle needs to be triggered in real time. Hardening the calculation into a chip running in Always-on mode keeps the power consumption low while the performance remains high, and running it in the background does not affect the user's experience of browsing the photo.
In this way, the second electronic device can photograph the user while the user views the first image to obtain an image including the user's face, and obtain from that image the face direction angle of the user relative to the electronic device at viewing time, so that the first image can subsequently be displayed based on this face direction angle, which improves the flexibility of image display.
Optionally, in an embodiment of the present application, the second image includes: at least two third objects;
optionally, after the step a1, the image display method according to the embodiment of the present application further includes the following steps B1 and B2:
step B1: the image display device acquires face feature information of each third object in the second image.
Step B2: the image display apparatus determines a third object satisfying a preset condition as a second object based on the face feature information of each third object.
The preset condition is any one of the following:
the size of the face is the largest;
the definition of the face is greater than a first threshold;
the tilt angle of the face is within a preset angle range.
For example, when the image display apparatus captures a plurality of objects (i.e., third objects) while the first image is being browsed, it may determine the second object based on the face information of the plurality of objects in the second image.
Illustratively, assume the second image includes face information of object a, object b and object c, with face sizes of 100 × 90, 120 × 100 and 110 × 100 respectively. Since the face of object b is the largest, object b is determined as the second object, and the display direction of the first image is determined according to the face direction angle of object b.
For example, the image display apparatus may determine the object whose face has the greatest definition as the second object. Exemplarily, assuming the second image includes face information of object a, object b and object c, and the definition of the face of object a is the highest, object a is determined as the second object, and the display direction of the first image is determined according to the face direction angle of object a.
For example, the image display apparatus may determine the object with the largest face direction angle as the second object. Exemplarily, assuming the second image includes face information of object a, object b and object c, and the face direction angle of object c is the largest, object c is determined as the second object, and the display direction of the first image is determined according to the face direction angle of object c.
In this way, when multiple objects viewing the first image are detected, the image display apparatus can determine one object from among them based on the above conditions and adjust the display direction of the first image according to that object's face direction angle, so that the object keeps a preferred visual angle when viewing the image.
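A sketch of this selection follows; the face-record schema is hypothetical, and combining the three preset conditions (the text permits any single one alone) is a choice of this sketch:

```python
def pick_second_object(faces, sharpness_threshold=0.5, angle_range=(60.0, 120.0)):
    """Choose the viewing object from detected faces. Each face is a dict with
    keys 'width', 'height', 'sharpness', 'tilt_angle' (a hypothetical schema).
    Faces outside the sharpness/tilt limits are filtered out, and the largest
    remaining face wins."""
    eligible = [f for f in faces
                if f["sharpness"] > sharpness_threshold
                and angle_range[0] <= f["tilt_angle"] <= angle_range[1]]
    pool = eligible or faces  # fall back to all faces if none qualify
    return max(pool, key=lambda f: f["width"] * f["height"])

faces = [
    {"width": 100, "height": 90,  "sharpness": 0.9, "tilt_angle": 95.0},   # object a
    {"width": 120, "height": 100, "sharpness": 0.8, "tilt_angle": 105.0},  # object b
    {"width": 110, "height": 100, "sharpness": 0.7, "tilt_angle": 80.0},   # object c
]
print(pick_second_object(faces))  # object b: the largest face among eligible ones
```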
The embodiment of the application provides an image shooting method, and fig. 7 shows a flowchart of the image shooting method provided by the embodiment of the application. As shown in fig. 7, the image capturing method provided in the embodiment of the present application may include the following steps 701 to 703:
step 701: the image capture device receives a second input from the user.
Optionally, in this embodiment of the present application, the second input is used to trigger the electronic device to perform a shooting operation. For example, the second input may be a touch input by the user on the shooting preview interface, or another feasible input, which is not limited in the embodiments of the present application. Similarly, the input mode of the second input may be a touch mode such as a long press, a tap, a drag or a slide, or another feasible input, which is not limited in the embodiments of the present application.
Step 702: the image shooting device responds to the second input, collects a first image through the first electronic equipment, and obtains a first parameter.
The first parameter includes relative position information of the first object and the first electronic device.
Optionally, in this embodiment of the application, the first object is a photographer of the first image.
Illustratively, the first object is the object that triggers the first electronic device to shoot the first image. For example, user A uses the rear camera of the electronic device to photograph scene A, and when the face of user A directly faces the display screen of the electronic device (i.e., the line of sight is perpendicular to the plane of the display screen), user A presses the shooting key to trigger the electronic device to shoot an image of scene A.
Optionally, in this embodiment of the application, the first parameter is relative position information of the photographer and the first electronic device when the first electronic device captures the first image. For example, the relative position information may be a gaze angle at which the first object gazes at the display screen of the first electronic device. For example, the relative position information may be an angle between the line of sight of the photographer and a plane on which the display screen of the first electronic device is located.
Illustratively, the first parameter may include a first face direction angle of the first object relative to the first electronic device, which may indicate the gaze angle at which the first object gazes at the display screen of the first electronic device.
Optionally, in this embodiment of the application, the image capturing apparatus may obtain the first parameter locally or from a server.
Step 703: the image shooting device stores the first image and the first parameter in an associated manner.
Optionally, in this embodiment of the application, the image capturing apparatus may establish an association relationship between the first image and the first parameter and then store them in association; alternatively, after storing the first image and the first parameter, it may establish an association relationship between the storage space of the first image and that of the first parameter, so that the first parameter can be quickly obtained when the first image is viewed. Optionally, the image capturing apparatus stores the first image and the first parameter in association also in response to the second input of the user.
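The embodiments leave the storage layout open; one minimal association scheme is a JSON sidecar file sharing the image's name, as sketched below (the file layout and key names are assumptions of this sketch):

```python
import json
from pathlib import Path

def save_image_with_parameter(image_bytes: bytes, face_angle_deg: float,
                              stem: str, folder: str = ".") -> None:
    """Store the first image and its first parameter in association: the image
    plus a same-named JSON sidecar, so the parameter is quick to fetch on viewing."""
    base = Path(folder)
    (base / f"{stem}.jpg").write_bytes(image_bytes)
    (base / f"{stem}.json").write_text(
        json.dumps({"first_parameter": {"face_direction_angle": face_angle_deg}}))

def load_first_parameter(stem: str, folder: str = ".") -> float:
    record = json.loads((Path(folder) / f"{stem}.json").read_text())
    return record["first_parameter"]["face_direction_angle"]
```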
In the image shooting method provided by the embodiments of the present application, the image capturing apparatus receives a second input of a user, acquires a first image through the first electronic device, obtains a first parameter, and stores the first image and the first parameter in association, where the first parameter includes relative position information of the first object and the first electronic device. In this way, the image capturing apparatus can obtain and store the first parameter when shooting the first image, so that the first image can be displayed flexibly based on the first parameter when it is displayed later.
Optionally, in this embodiment of the present application, in combination with fig. 7 above, as shown in fig. 8, the above step 702 may include the following steps 702a to 702c:
Step 702a: the image capturing apparatus acquires a first image through a first camera of the first electronic device in response to the second input.
Step 702b: the image capturing apparatus acquires a third image through a second camera of the first electronic device.
The content of the third image includes the first object.
Step 702c: the image capturing apparatus obtains a first parameter from the first image and the third image.
Illustratively, the first camera may be a rear camera or a front camera of the first electronic device. Illustratively, the image capturing apparatus may be the first electronic device or the second electronic device. For example, the image capturing apparatus and the image display apparatus may be the same electronic device or different electronic devices.
For example, the image capturing apparatus may control the first camera to photograph the target object to obtain the first image corresponding to the target object. Illustratively, the content of the first image may be scenery, a portrait, a building, a still life, and the like.
Illustratively, the second camera may be a front camera of the first electronic device.
For example, the image capturing apparatus may acquire the first image and the third image simultaneously or one after the other. For example, in response to the second input, the image capturing apparatus may first acquire the first image through the rear camera and then acquire the third image through the front camera; or, in response to the second input, it may acquire the third image through the front camera while acquiring the first image through the rear camera. For example, the image capturing apparatus may acquire the third image through the second camera at the instant the first image is captured.
The third image may be referred to as a target reference image.
It should be noted that the image capturing apparatus may store the first image and the first parameter in association with each other in response to the second input.
Illustratively, the first object may be a photographer of the first image.
For example, assuming user A takes an image using the rear camera of the electronic device, the electronic device starts the front camera to photograph user A at the moment user A, having adjusted the shooting content and shooting angle, presses the shooting key, thereby obtaining a third image including the face image of user A.
For example, the image capturing device may obtain the first parameter based on the facial feature information of the first object in the third image.
For example, the image capturing device may determine a first face direction angle of the first object with respect to the first electronic device based on the facial feature information of the first object in the third image.
The following describes in detail a process of determining a first face direction angle of a first object with respect to a first electronic device based on facial feature information of the first object in a third image.
Step 1: the position of a face feature point of the first object in the third image is detected.
For example, the image capturing apparatus may input the third image into the deep learning network, and preliminarily detect the position of the face feature point in the third image through a deep learning algorithm.
Step 2: the reference face is subjected to correction processing based on the position of the face feature point in the third image.
The reference face is a face image of the first object.
Illustratively, in the case where it is detected that a plurality of faces are included in the third image, the image capturing apparatus selects a face having the largest area as a reference face for calculating the face direction.
Illustratively, the image capturing apparatus calculates the orientation of the face from the coordinates of the left and right eyes and the left and right mouth corners of the reference face, and, according to that orientation, adjusts the reference face to the position closest to an upright, forward face (i.e., such that the reference face direction angle is close or equal to 0°) by rotating 90°, 180° or 270°, or flipping horizontally. Correcting the face direction in advance in this way improves the accuracy of the subsequent face keypoint detection.
Step 3: generate a reference face image from the face region of the reference face after the face correction processing.
Step 4: adjust the image size of the reference face image, and detect the keypoint positions on the resized reference face image.
Illustratively, the image capturing apparatus scales the corrected reference face image, for example to 48 × 48, and inputs the scaled reference face image into the face keypoint detection network to obtain the keypoint coordinates of the adjusted face.
Illustratively, the image capturing apparatus performs inverse adjustment on the key point coordinates based on the correction angle during face correction and the relative coordinate relationship between the reference face image and the third image, maps the key point coordinates onto the reference face in the third image, and calculates the global key point coordinates of the reference face according to the coordinate position of the reference face.
It should be noted that, since the positions of the key points of the eyes, nose, and mouth corners in the reference image need to be detected, if these 5 key points are directly obtained from the third image, the accuracy is low, so the face and the key point positions with low accuracy can be detected first, and although the key point positions with low accuracy are not taken as the final result, the face can be simply determined to be vertical or horizontal, if the face is horizontal, the face is adjusted to be vertical, because the accuracy of key point detection by uniformly using the vertical face is higher, and then the reference face image with the adjusted direction is input into the second network, so as to obtain the key point positions with higher accuracy.
Step 5: and calculating the face direction angle of the reference face in the third image according to the positions of the key points of the reference face image.
For example, the photographing apparatus may fit the coordinates of the midpoint of the two eyes (x1, y1), the midpoint of the two mouth corners (x2, y2), and the nose (x3, y3) by the least squares method, take the fitted straight line as the direction of the reference face, and take the angle between that line and a horizontal line as the face direction angle. As shown in fig. 6, the midpoint of the eyes is coordinate point a, the midpoint of the mouth corners is coordinate point b, the nose is coordinate point c, the fitted line is straight line C, the horizontal line is straight line D, the face direction angle is 120°, and the face is inclined 30° to the left.
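A minimal sketch of this fit; regressing x on y (rather than y on x) is an implementation choice that avoids an infinite slope for a near-vertical face axis, and is not prescribed by the patent:

```python
import numpy as np

def face_direction_angle(eye_mid, mouth_mid, nose):
    """Least-squares line through the three points; returns its angle to the horizontal in [0, 180)."""
    pts = np.array([eye_mid, mouth_mid, nose], dtype=float)
    m, _b = np.polyfit(pts[:, 1], pts[:, 0], 1)     # fit x = m*y + b
    return np.degrees(np.arctan2(1.0, m)) % 180.0   # angle of direction vector (m, 1) vs horizontal

# A perfectly vertical face axis gives 90 degrees; in the fig. 6 example the fitted
# line makes 120 degrees with the horizontal, i.e. the face is inclined 30 degrees.
print(face_direction_angle((0.0, 0.0), (0.0, 2.0), (0.0, 1.0)))  # -> 90.0
```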
Optionally, the image capturing device may start to calculate the face direction information of the third image while acquiring the third image, or may calculate the face direction information of the third image at any time after acquiring the third image, which is not limited in this embodiment of the present application.
Optionally, in combination with the foregoing embodiment, in a case that the face direction angle of the first object in the third image is obtained through calculation, the image capturing apparatus may write the angle information of the face direction angle as the image information of the third image into the information header of the third image, and store the information header in the electronic device together with the third image.
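One way to persist the angle, sketched here with Pillow's PNG text chunks; the patent only requires that the angle be written into the image's information header, so the metadata key and the PNG container are assumptions, and EXIF or XMP would serve equally well:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_face_angle(src_path, dst_path, face_angle_deg):
    """Store the computed face direction angle in the image's metadata (dst_path must be a PNG)."""
    meta = PngInfo()
    meta.add_text("face_direction_angle", f"{face_angle_deg:.1f}")  # hypothetical key name
    Image.open(src_path).save(dst_path, pnginfo=meta)

def read_face_angle(path):
    """Read the stored angle back when the image is later displayed."""
    return float(Image.open(path).text["face_direction_angle"])
```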
Further optionally, in this embodiment of the application, with reference to fig. 8, as shown in fig. 9, the content of the third image includes: at least two objects.
The above step 702b may include the following step 702b1:
Step 702b1: the image capturing apparatus acquires a third image through a second camera of the first electronic device, and determines a first object from the at least two objects based on the face image of each object in the third image.
For example, in the case of capturing the first image, if the third image acquired by the image capturing apparatus through the front camera includes a plurality of objects, that is, a plurality of objects are present while the first image is being captured, the image capturing apparatus may determine the first object based on the face information of the plurality of objects in the third image.
For example, assuming that the third image includes face information of object a, object b, and object c, with face sizes of 100 × 90, 120 × 100, and 110 × 100 respectively, object b is determined as the first object because its face size is the largest.
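The selection rule of this example reduces to a one-line comparison of face-box areas; the dictionary layout below is illustrative:

```python
def pick_first_object(face_sizes):
    """Return the object whose face box has the largest area."""
    return max(face_sizes, key=lambda k: face_sizes[k][0] * face_sizes[k][1])

face_sizes = {"a": (100, 90), "b": (120, 100), "c": (110, 100)}
assert pick_first_object(face_sizes) == "b"  # 120 * 100 = 12000 is the largest area
```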
In this way, when the first image is captured, if a plurality of objects are detected, the image capturing apparatus may determine one object from the plurality of objects based on the above condition and adjust the display direction of the first image according to that object's face direction angle, so that the user views the first image at a preferred viewing angle.
The shooting method provided by the embodiments of the present application may be executed by a shooting apparatus. In the embodiments of the present application, a shooting apparatus executing the shooting method is taken as an example to describe the shooting apparatus provided by the embodiments of the present application.
Fig. 10 shows an image display apparatus 1000 provided by an embodiment of the present application. The image display apparatus 1000 may include: a first receiving module 1001 and a display module 1002, wherein: the first receiving module 1001 is configured to receive a first input for an identifier of a first image; the display module 1002 is configured to display the first image based on a first parameter and a second parameter in response to the first input received by the first receiving module 1001; wherein the first parameter includes: relative position information of the first object and the first electronic device; and the second parameter includes: relative position information of the second object and the second electronic device.
Optionally, in an embodiment of the present application, the apparatus further includes: a determination module; the determining module is configured to determine a target display direction based on the first parameter and the second parameter; the display module is specifically configured to display the first image according to the target display direction determined by the determination module.
Optionally, in an embodiment of the present application, the first parameter includes: a first face direction angle of the first object relative to the first electronic device, and the second parameter includes: a second face direction angle of the second object relative to the second electronic device; the determining module is specifically configured to determine the target display direction based on an angle difference between the first face direction angle and the second face direction angle.
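A sketch of how the angle difference could drive the display, under the assumption that the target display direction is the first image rotated by the signed difference of the two face direction angles (the patent leaves the exact mapping to the implementation):

```python
def target_display_rotation(first_angle_deg, second_angle_deg):
    """Signed rotation in [-180, 180) to apply to the first image so that it keeps,
    for the current viewer, the orientation it had relative to the photographer."""
    return (second_angle_deg - first_angle_deg + 180.0) % 360.0 - 180.0

# E.g. captured with the photographer's face at 90 degrees and viewed with the
# viewer's face at 120 degrees: rotate the displayed image by 30 degrees.
print(target_display_rotation(90.0, 120.0))  # -> 30.0
```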
Optionally, in an embodiment of the present application, the apparatus further includes: a first acquisition module;
the first obtaining module is configured to obtain a second image through a camera of a second electronic device before determining a target display direction based on an angle difference between a first face direction angle and the second face direction angle, where the second image includes a second object;
the determining module is further configured to determine a face direction angle of the second object according to the second image acquired by the first acquiring module.
Alternatively, in the embodiments of the present application,
the second image includes: at least two third objects;
the first obtaining module is further configured to obtain face feature information of each third object in the second image;
the determining module is further configured to determine, as the second object, the third object that meets a preset condition based on the face feature information of each third object acquired by the first acquiring module;
wherein meeting the preset condition includes any one of the following:
the size of the face is maximum;
the definition of the face is greater than a first threshold value;
the inclination angle of the face is within a preset angle range.
In the image display apparatus provided in the embodiments of the present application, the image display apparatus receives a first input of a user for an identifier of a first image and displays the first image based on a first parameter and a second parameter, where the first parameter includes: relative position information of the first object and the first electronic device, and the second parameter includes: relative position information of the second object and the second electronic device. In this way, when the first image is viewed, the image display apparatus can display it according to the relative position information of the second object and the second electronic device and the relative position information of the first object and the first electronic device. Therefore, when displaying the first image, the image display apparatus can flexibly display it based on the relative position information of the objects and the electronic devices, which improves the visual effect when the image is displayed.
The image display device in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic Device may be, for example, a Mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic Device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) Device, a robot, a wearable Device, an ultra-Mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image display device provided in the embodiment of the present application can implement each process implemented in the method embodiments of fig. 1 to fig. 6, and is not described herein again to avoid repetition.
Fig. 11 shows an image capturing apparatus 2000 provided by an embodiment of the present application. The image capturing apparatus 2000 may include: a second receiving module 2001, a second obtaining module 2002, and a storage module 2003, wherein: the second receiving module 2001 is configured to receive a second input from the user; the second obtaining module 2002 is configured to, in response to the second input received by the second receiving module 2001, acquire a first image through the first electronic device and obtain a first parameter, where the first parameter includes relative position information of the first object and the first electronic device; and the storage module 2003 is configured to store the first image and the first parameter acquired by the second obtaining module 2002 in an associated manner.
Optionally, in an embodiment of the present application, the second obtaining module is specifically configured to acquire a first image through a first camera of the first electronic device, acquire a third image through a second camera of the first electronic device, where the content of the third image includes the first object, and obtain the first parameter according to the first image and the third image.
Optionally, in an embodiment of the present application, the content of the third image includes: at least two objects;
the second obtaining module is specifically configured to acquire a third image through a second camera of the first electronic device, and determine the first object from the at least two objects based on the face image of each object in the third image.
In the image capturing apparatus provided by the embodiments of the present application, the image capturing apparatus receives a second input of a user, acquires a first image through the first electronic device, obtains a first parameter, and stores the first image and the first parameter in an associated manner, where the first parameter includes relative position information of the first object and the first electronic device. In this way, the image capturing apparatus can obtain and store the first parameter when capturing the first image, so that the first image can subsequently be displayed flexibly based on the first parameter.
The image capturing apparatus in the embodiments of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The image capturing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiment of the present application.
The image capturing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 7 to 9, and is not described here again to avoid repetition.
Optionally, as shown in fig. 12, an electronic device 3000 is further provided in the embodiments of the present application, including a processor 3001 and a memory 3002, where the memory 3002 stores a program or an instruction that can be executed on the processor 3001. When the program or the instruction is executed by the processor 3001, the steps of the foregoing image display method or image capturing method embodiments are implemented with the same technical effects, which are not described again here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 13 is a schematic hardware structure diagram of an electronic device implementing the embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described again here.
In some possible implementations, the user input unit 107 is configured to receive a first input of an identifier for a first image; the above-mentioned display unit 106, which is used for responding to the first input received by the user input unit 107, and displaying the first image based on the first parameter and the second parameter; wherein the first parameter includes: relative position information of the first object and the first electronic device; the second parameter includes: relative position information of the second object and the second electronic device.
Optionally, in this embodiment of the application, the processor 110 is configured to determine a target display direction based on the first parameter and the second parameter; the display unit 106 is specifically configured to display the first image according to the target display direction determined by the processor 110.
Optionally, in an embodiment of the present application, the first parameter includes: a first face direction angle of the first object relative to the first electronic device, and the second parameter includes: a second face direction angle of the second object relative to the second electronic device; the processor 110 is specifically configured to determine the target display direction based on an angle difference between the first face direction angle and the second face direction angle.
Optionally, in this embodiment of the application, the processor 110 is configured to, before determining the target display direction based on an angle difference between a first face direction angle and a second face direction angle, obtain a second image by using a camera of a second electronic device, where the second image includes a second object;
the processor 110 is further configured to determine a face direction angle of the second object according to the acquired second image.
Alternatively, in the embodiments of the present application,
the second image includes: at least two third objects;
the processor 110 is further configured to obtain face feature information of each third object in the second image;
the processor 110 is further configured to determine, as the second object, the third object meeting a preset condition based on the obtained face feature information of each third object;
wherein meeting the preset condition includes any one of the following:
the size of the face is maximum;
the definition of the face is greater than a first threshold value;
the inclination angle of the face is within a preset angle range.
In the electronic device provided in the embodiments of the present application, the electronic device receives a first input of a user for an identifier of a first image and displays the first image based on a first parameter and a second parameter, where the first parameter includes: relative position information of the first object and the first electronic device, and the second parameter includes: relative position information of the second object and the second electronic device. In this way, when the first image is viewed, the electronic device can display it according to the relative position information of the second object and the second electronic device and the relative position information of the first object and the first electronic device. Therefore, when displaying the first image, the electronic device can flexibly display it based on the relative position information of the objects and the electronic devices, which improves the visual effect of the displayed image.
In other possible implementations, the user input unit 107 is configured to receive a second input from a user; the processor 110 is configured to, in response to a second input received by the user input unit 107, acquire a first image through the first electronic device, and acquire a first parameter, where the first parameter includes relative position information of the first object and the first electronic device; the memory 109 is configured to store the acquired first image and the first parameter in an associated manner.
Optionally, in this embodiment of the application, the input unit 104 is configured to acquire a first image through a first camera of the first electronic device, and to acquire a third image through a second camera of the first electronic device, where the content of the third image includes the first object; the processor 110 is specifically configured to obtain the first parameter according to the first image and the third image.
Optionally, in an embodiment of the present application, the content of the third image includes: at least two objects;
the input unit 104 is specifically configured to acquire a third image through a second camera of the first electronic device; the processor 110 is further configured to determine a first object from at least two objects based on the face image of each object in the third image.
In the electronic device provided by the embodiments of the present application, the electronic device receives a second input of a user, acquires a first image through the first electronic device, obtains a first parameter, and stores the first image and the first parameter in an associated manner, where the first parameter includes relative position information of the first object and the first electronic device. In this way, the electronic device can obtain and store the first parameter when capturing the first image, so that the first image can subsequently be displayed flexibly based on the first parameter.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, where the first storage area may store an operating system, and an application program or an instruction (such as a sound playing function, an image playing function, etc.) required for at least one function. Further, the memory 109 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, etc., and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium. When the program or the instruction is executed by a processor, each process of the foregoing image display method or image capturing method embodiments is implemented with the same technical effects, which are not described again here to avoid repetition.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiments of the present application further provide a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing image display method or image capturing method embodiments with the same technical effects, which are not described again here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
Embodiments of the present application provide a computer program product, which is stored in a storage medium and executed by at least one processor to implement the processes of the above image display method or the image capturing method embodiments, and achieve the same technical effects, and in order to avoid repetition, the descriptions of the processes are omitted here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (18)

1. An image display method, characterized in that the method comprises:
receiving a first input for identification of a first image;
displaying the first image based on a first parameter and a second parameter in response to the first input;
wherein the first parameter comprises: relative position information of the first object and the first electronic device; the second parameter includes: relative position information of the second object and the second electronic device.
2. The method of claim 1, wherein displaying the first image based on the first parameter and the second parameter comprises:
determining a target display direction based on the first parameter and the second parameter;
and displaying the first image according to the target display direction.
3. The method of claim 2, wherein the first parameter comprises: a first face direction angle of the first object relative to the first electronic device, and the second parameter comprises: a second face direction angle of the second object relative to the second electronic device;
the determining a target display direction based on the first parameter and the second parameter includes:
and determining the target display direction based on the angle difference between the first face direction angle and the second face direction angle.
4. The method of claim 3, wherein before determining the target display direction based on the angle difference between the first face direction angle and the second face direction angle, the method further comprises:
acquiring a second image through a camera of the second electronic device, wherein the second image comprises the second object;
and determining the face direction angle of the second object according to the second image.
5. The method of claim 4,
the second image includes: at least two third objects;
after the second image is acquired by the camera of the second electronic device, the method further includes:
acquiring the face feature information of each third object in the second image;
determining the third objects meeting preset conditions as the second objects based on the face feature information of each third object;
wherein, the meeting the preset condition comprises any one of the following items:
the size of the face is maximum;
the definition of the face is greater than a first threshold value;
the inclination angle of the face is within a preset angle range.
6. An image capturing method, characterized in that the method comprises:
receiving a second input of the user;
responding to the second input, acquiring a first image through first electronic equipment, and acquiring a first parameter, wherein the first parameter comprises relative position information of a first object and the first electronic equipment;
and storing the first image and the first parameter in an associated manner.
7. The method of claim 6, wherein acquiring the first image and acquiring the first parameter by the first electronic device comprises:
acquiring a first image through a first camera of first electronic equipment;
acquiring a third image through a second camera of the first electronic device, wherein the content of the third image comprises the first object;
and acquiring a first parameter according to the first image and the third image.
8. The method of claim 7, wherein the content of the third image comprises: at least two objects;
the acquiring, by a second camera of the first electronic device, a third image, content of the third image including the first object, includes:
and acquiring a third image through a second camera of the first electronic equipment, and determining the first object from the at least two objects based on the face image of each object in the third image.
9. An image display apparatus, characterized in that the apparatus comprises: first receiving module and display module, wherein:
the first receiving module is used for receiving a first input aiming at the identification of the first image;
the display module is used for responding to the first input received by the first receiving module and displaying the first image based on a first parameter and a second parameter;
wherein the first parameter comprises: relative position information of the first object and the first electronic device; the second parameter includes: relative position information of the second object and the second electronic device.
10. The apparatus of claim 9, further comprising: a determination module;
the determining module is used for determining a target display direction based on the first parameter and the second parameter;
the display module is specifically configured to display the first image according to the target display direction determined by the determination module.
11. The apparatus of claim 10, wherein the first parameter comprises: a first face direction angle of the first object relative to the first electronic device, and the second parameter comprises: a second face direction angle of the second object relative to the second electronic device;
the determining module is specifically configured to determine the target display direction based on an angle difference between the first face direction angle and the second face direction angle.
12. The apparatus of claim 11, further comprising: a first acquisition module;
the first obtaining module is configured to obtain a second image through a camera of the second electronic device before determining the target display direction based on an angle difference between the first face direction angle and the second face direction angle, where the second image includes the second object;
the determining module is further configured to determine a face direction angle of the second object according to the second image acquired by the first acquiring module.
13. The apparatus of claim 12,
the second image includes: at least two third objects;
the first obtaining module is further configured to obtain face feature information of each third object in the second image;
the determining module is further configured to determine, as the second object, the third object that meets a preset condition based on the face feature information of each third object acquired by the first acquiring module;
wherein, the meeting the preset condition comprises any one of the following items:
the face size is largest;
the definition of the face is larger than a first threshold value;
the inclination angle of the face is within a preset angle range.
14. An image capturing apparatus, characterized in that the apparatus comprises: the device comprises a second receiving module, a second obtaining module and a storage module, wherein:
the second receiving module is used for receiving a second input of the user;
the second obtaining module is configured to, in response to the second input received by the second receiving module, acquire a first image through a first electronic device, and obtain a first parameter, where the first parameter includes relative position information of a first object and the first electronic device;
the storage module is configured to store the first image and the first parameter acquired by the second acquisition module in an associated manner.
15. The apparatus of claim 14,
the second acquisition module is specifically used for acquiring a first image through a first camera of the first electronic device; acquiring a third image through a second camera of the first electronic device, wherein the content of the third image comprises the first object; and acquiring a first parameter according to the first image and the third image.
16. The apparatus of claim 15, wherein the content of the third image comprises: at least two objects;
the second obtaining module is specifically configured to collect a third image through a second camera of the first electronic device; determining the first object from the at least two objects based on a face image of each of the objects in the third image.
17. An electronic device, comprising a processor and a memory, said memory storing a program or instructions executable on said processor, said program or instructions, when executed by said processor, implementing the image display method of any one of claims 1 to 5, or the steps of the image capturing method of any one of claims 6 to 8.
18. A readable storage medium, characterized in that a program or instructions are stored thereon, which program or instructions, when executed by a processor, carry out the steps of the image display method according to any one of claims 1 to 5, or the image capture method according to any one of claims 6 to 8.
CN202210724390.1A 2022-06-23 2022-06-23 Image shooting and displaying method and device, electronic equipment and readable storage medium Pending CN115118879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210724390.1A CN115118879A (en) 2022-06-23 2022-06-23 Image shooting and displaying method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115118879A true CN115118879A (en) 2022-09-27

Family

ID=83328653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210724390.1A Pending CN115118879A (en) 2022-06-23 2022-06-23 Image shooting and displaying method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115118879A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112702527A (en) * 2020-12-28 2021-04-23 维沃移动通信(杭州)有限公司 Image shooting method and device and electronic equipment
CN113873148A (en) * 2021-09-14 2021-12-31 维沃移动通信(杭州)有限公司 Video recording method, video recording device, electronic equipment and readable storage medium
WO2022022633A1 (en) * 2020-07-31 2022-02-03 维沃移动通信有限公司 Image display method and apparatus, and electronic device

Similar Documents

Publication Publication Date Title
WO2019134516A1 (en) Method and device for generating panoramic image, storage medium, and electronic apparatus
US20160286131A1 (en) Method and apparatus for displaying self-taken images
CN110636276B (en) Video shooting method and device, storage medium and electronic equipment
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN112637500B (en) Image processing method and device
CN113329172B (en) Shooting method and device and electronic equipment
US11949986B2 (en) Anti-shake method, anti-shake apparatus, and electronic device
CN111083371A (en) Shooting method and electronic equipment
US9088720B2 (en) Apparatus and method of displaying camera view area in portable terminal
CN112532881A (en) Image processing method and device and electronic equipment
CN112784081A (en) Image display method and device and electronic equipment
CN109934168B (en) Face image mapping method and device
US20150009123A1 (en) Display apparatus and control method for adjusting the eyes of a photographed user
CN115499589A (en) Shooting method, shooting device, electronic equipment and medium
CN114697530B (en) Photographing method and device for intelligent view finding recommendation
CN114785957A (en) Shooting method and device thereof
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN115278049A (en) Shooting method and device thereof
CN115118879A (en) Image shooting and displaying method and device, electronic equipment and readable storage medium
KR20210112390A (en) Filming method, apparatus, electronic device and storage medium
WO2023206475A1 (en) Image processing method and apparatus, electronic device and storage medium
CN112887621B (en) Control method and electronic device
CN114071009B (en) Shooting method and equipment
CN115242976A (en) Shooting method, shooting device and electronic equipment
CN117745528A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination