CN116342639A - Image display method, electronic device and medium - Google Patents


Publication number
CN116342639A
Authority
CN
China
Prior art keywords
clipping
face
target
mobile phone
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111579874.3A
Other languages
Chinese (zh)
Inventor
苏达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111579874.3A
Priority to PCT/CN2022/140826 (published as WO2023116792A1)
Publication of CN116342639A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

The application belongs to the field of video shooting technology and relates to an image display method for video shooting, an electronic device and a medium. The method comprises the following steps: acquiring a first initial image, and cropping from it a first cropped image that includes a cropping target, the cropping target having a first position relative to a first reference area of the first cropped image; cropping a second crop area from a second initial image to obtain and display a second cropped image in which the cropping target is located at a second position; and, when the second position meets a preset cropping condition, cropping a third crop area of a second crop size from an acquired third initial image that includes the cropping target, to obtain and display a third cropped image. With this method, the electronic device can track the position of the cropping target in the picture, sparing the user from frequently and manually adjusting the shooting angle of the electronic device to keep it aimed at the cropping target, which improves the shooting effect and brings a better user experience.

Description

Image display method, electronic device and medium
Technical Field
The application relates to video shooting technology, and more particularly to an image display method, an electronic device and a medium.
Background
When a user shoots a person with an electronic device, the user can manually adjust the shooting picture according to the size and position of the person, so that the proportion between the person and the picture is well balanced. As shown in fig. 1a, the user places the mobile phone 100 on a stand to shoot a person 200 for a live video broadcast. After the user opens the live broadcast application, as shown in fig. 1b, the user may manually adjust the shooting picture 101 captured by the camera according to the position and size of the face 201. For example, the user aims the mobile phone 100 at the face 201 and enlarges the shooting picture 101 with a two-finger pinch-out gesture, obtaining and displaying an adjusted cropped picture 102. In the cropped picture 102, the face 201 is located at the center of the picture, and the size of the face 201 is well proportioned to the size of the cropped picture 102, so that the person 200 stands out in the cropped picture 102 and the artistic effect of the live video is enhanced.
However, with the shooting angle of the mobile phone 100 fixed, if the person 200 moves down and to the right as shown in fig. 1c while being shot, the person 200 deviates from the center of the cropped picture 102 and the size of the face 201 also changes. In this case, the mobile phone 100 can neither track the movement of the person to keep the face at the center of the cropped picture, nor update the cropped picture in real time according to the change in face size. The user is therefore required to frequently and manually adjust the shooting angle of the mobile phone 100 to follow the movement of the person and keep the person at the center of the cropped picture, which makes the shooting process cumbersome and the user experience poor.
Disclosure of Invention
An object of the present application is to provide an image display method for video shooting, together with an electronic device and a medium.
A first aspect of the present application provides an image display method, applied to an electronic device, including:
acquiring a first initial image, wherein the first initial image comprises at least one cropping target, and the at least one cropping target comprises a first cropping target;
cropping a first crop region of a first crop size from the first initial image to obtain and display a first cropped image comprising the at least one cropping target, wherein the first cropping target has a first position relative to a first reference region corresponding to the first crop size in the first crop region, and the first position does not meet a preset cropping condition;
acquiring a second initial image comprising the at least one cropping target;
cropping a second crop region of the first crop size from the second initial image to obtain and display a second cropped image comprising the at least one cropping target, wherein the first cropping target has a second position relative to the first reference region corresponding to the first crop size in the second crop region;
and, in a case where the second position meets the preset cropping condition, cropping a third crop region of a second crop size from an acquired third initial image comprising the at least one cropping target, to obtain and display a third cropped image, wherein the first cropping target has a third position relative to a second reference region corresponding to the second crop size in the third crop region, and the third position does not meet the preset cropping condition.
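The flow of the steps above can be sketched in Python. This is an illustrative reading of the claim, not the patented implementation: the `(x, y, w, h)` box representation, the reference-region scale of 0.5, and all function names are assumptions.

```python
# Illustrative sketch of the claimed flow. A box is (x, y, w, h) in image
# coordinates; the 0.5 reference-region scale is an assumed value.

def crop_around(target, crop_w, crop_h):
    """Place a crop region of the given size centered on the cropping target."""
    x, y, w, h = target
    cx, cy = x + w / 2, y + h / 2
    return (cx - crop_w / 2, cy - crop_h / 2, crop_w, crop_h)

def reference_region(crop, scale=0.5):
    """Virtual region concentric with the crop region, `scale` times its size."""
    x, y, w, h = crop
    return (x + w * (1 - scale) / 2, y + h * (1 - scale) / 2,
            w * scale, h * scale)

def inside(inner, outer):
    """True if the inner box lies entirely within the outer box."""
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def meets_crop_condition(target, crop):
    """Preset cropping condition: the target is at least partly outside the
    reference region, so a new crop region must be determined."""
    return not inside(target, reference_region(crop))
```

On the first frame the crop is centered on the target, so the condition is not met; once the target drifts outside the reference region on a later frame, a new crop region (possibly of a different size) is centered on it again.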
With this method, the electronic device can track the position of the cropping target (i.e., the shooting subject) in the captured picture, so that a single cropping target, or the center of multiple cropping targets, always stays at the center of the displayed picture. This spares the user from frequently and manually adjusting the shooting angle of the electronic device to keep the camera aimed at the cropping target, improves the shooting picture, and brings a better user experience.
In a possible implementation of the first aspect, the first cropping target includes a face.
In a possible implementation of the first aspect, the first crop size is such that the ratios of the width and height of the first crop region to the width and height of the first cropping target, respectively, meet a first preset size ratio.
In a possible implementation of the first aspect, the center point of the first reference region coincides with the center point of the first crop region, and the first reference region is located inside the first crop region.
In a possible implementation of the first aspect, in a case where the first initial image includes only the first cropping target, the first cropping target is smaller in size than the first reference region, and
the preset cropping condition comprises: at least a partial region of the cropping target is located outside the first reference region.
That is, in the embodiment of the present application, the first initial image may be a captured image acquired by a camera of the electronic device, and the first cropping target may be the face box of a person in the first initial image. The first crop size may correspond to at least one scene type included in a preset cropping rule; the size of the first crop region is determined according to the first crop size, and the first initial image is cropped based on the first crop region to obtain the first cropped image.
The first reference region (i.e., the first threshold region) may be a virtual region that the electronic device sets within the first crop region; a preset ratio may hold between the size of the first reference region and the size of the first crop region, with the center point of the first reference region coinciding with the center point of the first crop region. The position of the first reference region is fixed relative to the first crop region.
In the case where the first initial image includes only the first cropping target, the first position of the first cropping target may be the center position of the first crop region, for example with the center point of the first cropping target coinciding with the center point of the first crop region. The position of the first cropping target relative to the first crop region is variable: when the first cropping target is displaced, it moves from the first position to the second position. The electronic device then needs to determine whether the first cropping target at the second position meets the preset cropping condition, where the condition may be that the first cropping target should always stay inside the first reference region. If a partial region of the first cropping target at the second position (for example, at least one side edge of the first cropping target) exceeds the first reference region, the electronic device may determine a third crop region of the second crop size in the acquired third initial image. In the third crop region, the third position of the first cropping target may be its center position, for example with the center point of the first cropping target coinciding with the center point of the third crop region, and the first cropping target is located within a second reference region (i.e., a second threshold region) whose center point coincides with the center point of the third crop region.
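The "at least one side edge exceeds the reference region" test described above can be made concrete with a small helper. The `(x, y, w, h)` tuple representation and the function name are assumptions made for illustration.

```python
def exceeded_edges(target, ref):
    """Edges along which the target box (x, y, w, h) sticks out of the
    reference region; a non-empty result would trigger re-cropping."""
    tx, ty, tw, th = target
    rx, ry, rw, rh = ref
    edges = []
    if tx < rx:
        edges.append("left")
    if ty < ry:
        edges.append("top")
    if tx + tw > rx + rw:
        edges.append("right")
    if ty + th > ry + rh:
        edges.append("bottom")
    return edges
```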
In a possible implementation of the first aspect, the method further includes:
in a case where the first initial image includes a plurality of cropping targets, generating a fitted cropping target for the plurality of cropping targets according to their circumscribed rectangle, wherein the center of the fitted cropping target coincides with the center of the circumscribed rectangle, and the size of the fitted cropping target is identical to the size of one of the plurality of cropping targets.
In a possible implementation of the first aspect, the fitted cropping target is smaller in size than the first reference region, and
the preset cropping condition comprises: at least a partial region of the fitted cropping target exceeds the first reference region.
In a possible implementation of the first aspect, the method further includes:
setting, in the first crop region, a third reference region that includes the circumscribed rectangle, wherein the third reference region changes as the distance between the plurality of cropping targets changes; and the preset cropping condition further comprises:
at least a partial region of the circumscribed rectangle exceeds the third reference region.
That is, in the embodiment of the present application, in a case where the first initial image includes a plurality of first cropping targets (i.e., a plurality of face boxes), the circumscribed rectangle of the plurality of first cropping targets may be their smallest circumscribed rectangle. The fitted cropping target may be a region formed around the center point of that circumscribed rectangle, and its size may be the same as at least one of the plurality of first cropping targets. In this case, the first cropping target located at the first position is the fitted cropping target located at the first position, and the preset cropping condition may be that the fitted cropping target should always stay inside the first reference region.
The third reference region may be a virtual region that the electronic device sets within the first crop region; the size of the third reference region and the size of the first crop region may conform to a preset ratio, with the center point of the third reference region coinciding with the center point of the first crop region. The position of the third reference region is fixed relative to the first crop region.
When the plurality of first cropping targets are displaced, that is, when their fitted cropping target moves from the first position to the second position, and at least a partial region of the circumscribed rectangle of the plurality of first cropping targets (for example, at least one side of the circumscribed rectangle) exceeds the third reference region, the electronic device may determine a third crop region of the second crop size in the acquired third initial image; in the third crop region, the third position of the fitted cropping target may be its center position.
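The multi-target bookkeeping described above (smallest circumscribed rectangle, fitted cropping target centered on it) can be sketched as follows. Taking the fitted target's size from the first face box is an assumption; the description only requires it to match one of the boxes.

```python
def bounding_rect(boxes):
    """Smallest circumscribed rectangle of a list of face boxes (x, y, w, h)."""
    x0 = min(x for x, _, _, _ in boxes)
    y0 = min(y for _, y, _, _ in boxes)
    x1 = max(x + w for x, _, w, _ in boxes)
    y1 = max(y + h for _, y, _, h in boxes)
    return (x0, y0, x1 - x0, y1 - y0)

def fitted_target(boxes):
    """Region centered on the circumscribed rectangle and sized like one of
    the face boxes (here, arbitrarily, the first one)."""
    bx, by, bw, bh = bounding_rect(boxes)
    w, h = boxes[0][2], boxes[0][3]
    cx, cy = bx + bw / 2, by + bh / 2
    return (cx - w / 2, cy - h / 2, w, h)
```

The fitted target then plays the role of the single-target face box in the reference-region check, while the circumscribed rectangle itself is tested against the third reference region.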
A second aspect of the present application provides a readable medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the image display method as provided in the foregoing first aspect.
A third aspect of the present application provides an electronic device, comprising:
a memory for storing instructions for execution by one or more processors of the electronic device, and
A processor, which is one of the processors of the electronic device, for performing the image display method as provided in the foregoing first aspect.
A fourth aspect of the present application provides a computer program product comprising: a non-transitory computer readable storage medium containing computer program code for performing the image display method as provided in the foregoing first aspect.
Drawings
FIG. 1a illustrates a scene graph for video capture using an electronic device, according to an embodiment of the present application;
fig. 1b to 1c are schematic diagrams illustrating an electronic device cropping a shot to obtain a cropping screen according to an embodiment of the present application;
FIG. 2a shows a schematic diagram of cropping a shooting picture containing one photographed object to obtain a cropped picture according to an embodiment of the present application;
fig. 2b shows a schematic diagram of cropping a shot containing a plurality of shot objects to obtain a cropped screen according to an embodiment of the present application;
fig. 2c to 2d are schematic diagrams showing a clipping picture obtained by clipping the photographed picture again after shifting the photographed object in the photographed picture according to the embodiment of the present application;
FIG. 3 is a schematic diagram showing the kinds of preset clipping rules according to an embodiment of the present application;
FIG. 4 illustrates a schematic diagram defining preset clipping rules according to an embodiment of the present application;
FIG. 5 shows a hardware architecture diagram of an electronic device, according to an embodiment of the present application;
fig. 6a to 6b are flowcharts illustrating an image display method according to an embodiment of the present application;
fig. 7 shows a schematic diagram of a shot screen including a plurality of shot objects according to an embodiment of the present application;
fig. 8a to 8e are schematic diagrams showing clipping of a photographed picture including a plurality of photographed objects to obtain a clipping picture according to an embodiment of the present application;
FIG. 9 shows a flow diagram of another image display method according to an embodiment of the present application;
fig. 10a to 10b are schematic diagrams showing another clipping method for clipping a photographed picture including a plurality of photographed objects to obtain a clipping picture according to an embodiment of the present application;
fig. 11 shows a flow chart of another image display method according to an embodiment of the present application.
Detailed Description
Embodiments of the present application include, but are not limited to, an image display method for video photographing, and an electronic device and medium thereof. For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
To solve the problem, described in the Background, that an electronic device with a fixed shooting angle can neither track the movement of a person nor crop the shooting picture in real time according to the position and size of the person after the movement, an embodiment of the present application provides an image display method for video shooting, which includes the following steps: the electronic device recognizes the face in the shooting picture, determines the face region of the face in the shooting picture, crops the shooting picture around that face region according to a preset cropping rule to obtain a cropped picture, and displays the cropped picture on its screen. Meanwhile, a reference region is set for the face region in the cropped picture, and is used to determine whether the movement range of the face region meets a movement cropping condition. Thus, if the electronic device detects that the movement range of the face region meets the movement cropping condition, it crops the shooting picture again.
It is understood that, in the embodiment of the present application, the image display method performed by the electronic device is different according to the number of people included in the photographed image, and the image display method performed by the electronic device on the photographed image including one or more faces is described below.
For example, fig. 2a shows a schematic diagram of a shooting picture 101 including a person 200. As shown in fig. 2a, the electronic device may determine the face frame 2001 of the person 200 as the face region, and crop the shooting picture 101 around that face region according to a preset cropping rule to obtain the cropped picture 102. The electronic device may further set a reference region 104 centered on the center point of the cropped picture 102, where the width and height of the reference region 104 are both greater than those of the face frame 2001, so that the face frame 2001 lies inside the reference region 104. If the person 200 moves (for example, to the left), the face frame 2001 moves as well; if the electronic device detects that the face region corresponding to the face frame 2001 exceeds the reference region 104, it may acquire the face frame 2001 of the moved person 200 and crop the shooting picture 101 again.
For example, fig. 2b shows a schematic diagram of a shooting picture 101 including a person 201 and a person 202. As shown in fig. 2b, the electronic device may first determine the circumscribed rectangle of the face frame 2011 of the person 201 and the face frame 2021 of the person 202, that is, the face outer frame 301. The electronic device crops the shooting picture 101 around the face outer frame 301 according to a preset cropping rule to obtain the cropped picture 102; meanwhile, the electronic device may also set a face region 302 based on the center point of the face outer frame 301, using the height and width of the face frame 2011 or the face frame 2021. Next, the electronic device sets a reference region 303 centered on the center point of the cropped picture 102, with the width and height of the reference region 303 both greater than those of the face region 302, so that the face region 302 lies inside the reference region 303. If the person 201 and/or the person 202 move (for example, both move to the left at the same time), the face region 302 moves accordingly; if the electronic device detects that the face region 302 exceeds the reference region 303, it may acquire the face frames 2011 and 2021 of the moved persons 201 and 202 and crop the shooting picture 101 again.
It can be appreciated that, in the embodiment of the present application, the face region is determined based on the position of the person's face in the shooting picture, so the face region moves along with the face. As long as the face region does not exceed the reference region, the electronic device may consider the face to still be at the center of the cropped picture and need not crop the shooting picture again.
By this method, the electronic device can track the positions of faces in the shooting picture, ensuring that a single face, or the center of multiple faces, stays at the center of the cropped picture, and sparing the user from frequently and manually adjusting the shooting angle of the electronic device to keep the camera aimed at the faces. The electronic device can also crop the shooting picture according to the face size under a preset cropping rule, so that the person is well proportioned to the picture presented on the device or recorded in the video, enhancing its artistic effect.
With continued reference to fig. 2a, taking the mobile phone 100 as the electronic device, the mobile phone 100 can recognize the face frame 2001 of the person 200 in the shooting picture 101 and, at the same time, obtain the size of the face frame 2001, for example its height h1. The mobile phone 100 may crop the shooting picture 101 to obtain the cropped picture 102, such that the height H1 of the cropped picture 102 and the height h1 of the face frame 2001 conform to a preset cropping rule, and place the face frame 2001 at the center of the cropped picture 102. The mobile phone 100 may further set, in the cropped picture 102, a reference region 104 formed around the center point of the cropped picture 102, so that the face frame 2001 lies within the region formed by the reference region 104.
As shown in fig. 2c, if the person 200 moves down and to the right, the mobile phone 100 detects that the face frame 2001 exceeds the reference region 104. As shown in fig. 2d, the mobile phone 100 may then reacquire the position and size of the face frame 2001, for example its new height h2, and crop the shooting picture 101 again to obtain a new cropped picture 102, in which the height H2 of the cropped picture 102 and the height h2 of the face frame 2001 conform to the preset cropping rule and the face frame 2001 is located at the center of the cropped picture 102; meanwhile, the mobile phone 100 may also reset the reference region 104 in the cropped picture 102.
The preset cropping rule between the height of the cropped picture and the height of the face can be described in terms of scene type, which refers to how large the photographed subject appears in the picture, depending on the distance between the shooting device and the subject. In the embodiment of the present application, the electronic device can store five scene types, such as close-up, close shot, medium close shot, medium shot and medium long shot. Fig. 3 shows, for the different scene types, the preset display ratios between the height of the cropped picture and the height of the face. For example, for a close-up, the preset display ratio between the height of the face and the height of the cropped picture is 0.4, and 0.178 is the preset display ratio between the distance from the upper edge of the cropped picture to the upper edge of the face and the height of the cropped picture.
For example, as shown in fig. 4, for an electronic device with a 6-inch screen of 16:9 aspect ratio, the screen is 13.5 cm long and 7.5 cm wide. If the electronic device detects that the height h1 of the face frame 2001 of the person 200 is 2 cm, then, in the close-up case and according to the preset display ratios corresponding to the close-up, the height H1 of the cropped picture 102 obtained by the electronic device is 5 cm; with the width-to-height ratio matching the 16:9 screen, the width W1 is about 9 cm, and the distance y1 from the upper edge of the cropped picture 102 to the upper edge of the face frame 2001 is about 0.9 cm.
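The close-up numbers in this example follow directly from the two preset ratios. A sketch of the arithmetic (function name and argument order are illustrative, not from the patent):

```python
def crop_for_scene(face_h, face_ratio, margin_ratio, aspect=16 / 9):
    """Derive crop height, width, and top margin from the detected face
    height and the preset display ratios for the current scene type."""
    crop_h = face_h / face_ratio        # close-up: 2 cm / 0.4  -> 5 cm
    crop_w = crop_h * aspect            # 16:9 screen: 5 * 16/9 -> ~8.9 cm
    top_margin = crop_h * margin_ratio  # 5 cm * 0.178          -> ~0.9 cm
    return crop_h, crop_w, top_margin
```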
The electronic device in the embodiment of the present application is a terminal device with a video shooting function. Common terminal devices include: vehicle-mounted devices, mobile phones, tablet computers, notebook computers, palmtop computers, mobile internet devices (mobile internet device, MID), wearable devices (including, for example, smart watches, smart bracelets and pedometers), personal digital assistants, portable media players, navigation devices, video game devices, set-top boxes, virtual reality and/or augmented reality devices, internet of things devices, industrial control devices, streaming media client devices, electronic books, reading devices, POS devices and other devices. The embodiment of the present application is described taking the mobile phone 100 as an example.
Fig. 5 shows a schematic hardware structure of a mobile phone 100 according to an embodiment of the present application, where the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a Baseband Processor (BP), and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters.
The wireless communication function of the mobile phone 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the handset 100.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. applied to the handset 100.
The mobile phone 100 implements display functions through a GPU, a display 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. In some embodiments, the cell phone 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The mobile phone 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display 194, an application processor, and the like.
The camera 193 is used to capture still images or video. In some embodiments, the cell phone 100 may include 1 or N cameras 193, N being a positive integer greater than 1. In the embodiment of the present application, the mobile phone 100 can capture a captured image including a person through the camera 193.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capabilities of the handset 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area.
The handset 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The handset 100 may listen to music, or to hands-free calls, through the speaker 170A.
A receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the handset 100 is answering a telephone call or voice message, the voice can be received by placing the receiver 170B close to the human ear.
Microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. In the embodiment of the present application, the mobile phone 100 may acquire the voice of the person in the shooting picture through the microphone 170C.
The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194.
The gyro sensor 180B may be used to determine the motion gesture of the cell phone 100.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the handset 100 calculates altitude from the barometric pressure value measured by the barometric pressure sensor 180C, aiding in positioning and navigation.
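The text says only that altitude is calculated from the measured pressure, without giving a formula; a common choice is the international barometric formula, sketched here as an assumption:

```python
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Approximate altitude in metres from barometric pressure (hPa),
    using the international barometric formula. The constants 44330 and
    5.255 are the standard-atmosphere values, not taken from the text."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```

At sea-level pressure the result is 0 m; at 900 hPa it is roughly 990 m, which is the kind of coarse estimate useful as a positioning and navigation aid.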
The magnetic sensor 180D includes a hall sensor.
The acceleration sensor 180E can detect the magnitude of acceleration of the mobile phone 100 in various directions (typically three axes).
A distance sensor 180F for measuring a distance.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode.
the ambient light sensor 180L is used to sense ambient light level.
The fingerprint sensor 180H is used to collect a fingerprint.
The temperature sensor 180J is for detecting temperature.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen".
The bone conduction sensor 180M may acquire a vibration signal.
The keys 190 include a power key, a volume key, etc. The keys 190 may be mechanical keys or touch keys.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback.
The indicator 192 may be an indicator light and may be used to indicate a charging state, a change in charge, a message, a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to enable contact with and separation from the handset 100.
The image display method of the mobile phone 100 of the present application will be described in detail with reference to fig. 6a and 6b based on the hardware configuration of the mobile phone 100 shown in fig. 5.
Specifically, the image cropping schemes in fig. 6a and 6b of the present application may be implemented by the processor 110 of the mobile phone 100 executing a related program. As shown in fig. 6a, the image display method for the mobile phone 100 according to one embodiment of the present application includes the following steps.
S601: and starting a camera application to shoot video.
In this embodiment, as shown in fig. 1, the user may place the mobile phone 100 on the stand, start the camera application of the mobile phone 100, and take a video image in the face of the person 200. It will be appreciated that the user may take video shots of a plurality of persons, and in this case, the shot images taken by the camera of the mobile phone 100 may include a plurality of persons.
In another embodiment of the present application, after placing the mobile phone 100 on the stand, the user may instead start a live video or video conference application, which live-streams or holds a video conference with the person 200.
S602: and carrying out face detection on the shooting picture, and judging whether the face is detected.
In the embodiment of the present application, the mobile phone 100 may perform face detection on each shot of video, that is, each frame of video. The mobile phone 100 may set a face frame (herein, the face frame may also be referred to as a region of interest (Region of Interest, ROI)) for the detected face, where the face frame may be a minimum bounding rectangle of the face, that is, four sides of the face frame may be tangent to upper, lower, left and right portions of the face, respectively.
In this embodiment of the present application, if the shooting picture includes a plurality of faces, as shown in fig. 7, where the mobile phone 100 detects that the shooting picture 101 includes a face 201 and a face 202, the mobile phone 100 may set a face frame for each face, such as a face frame 2011 and a face frame 2021. The mobile phone 100 may employ, for example, the YOLO (You Only Look Once) algorithm to perform face detection on the shooting picture 101. The YOLO algorithm here is an object detection algorithm that detects a face in the shooting picture through a convolutional neural network (CNN).
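As a minimal illustration (not the patent's actual implementation), the face frame can be derived as the minimum bounding rectangle of whatever the detector returns. The sketch below assumes a hypothetical detector output of facial landmark points as (x, y) tuples:

```python
def face_frame(landmarks):
    """Minimum bounding rectangle (left, top, width, height) of a set of
    facial landmark points, so each side is tangent to the face extremes."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    left, top = min(xs), min(ys)
    return (left, top, max(xs) - left, max(ys) - top)
```

For example, landmarks at (10, 20), (30, 25) and (15, 40) yield the frame (10, 20, 20, 20).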
In the embodiment of the present application, if the mobile phone 100 determines that a face exists in the shot screen, the mobile phone 100 executes step S603, and the mobile phone 100 obtains the size of the face and the position of the face, and determines the clipping region using the size of the face and the position of the face. Otherwise, the mobile phone 100 returns to step S602, and the mobile phone 100 continues to perform face detection on the shot image, and at the same time, the screen of the mobile phone 100 may continue to display the shot image acquired by the camera. If the mobile phone 100 is shooting a video, the mobile phone 100 generates a video using a shooting picture, and if the mobile phone 100 is live-broadcasting a video, the shooting picture is taken as a live-broadcasting picture.
In other embodiments of the present application, the mobile phone 100 may further employ: the optical flow method or continuous adaptive algorithm (Continuously Adaptive Mean Shift algorithm, camShift) tracks the face in the photographed image 101, and further acquires the position of the face in the photographed image 101 in real time.
S603: the size of the face and the position of the face are obtained.
In this embodiment of the present application, the size of the face may be the width and the height of a face frame corresponding to the face in the shot frame, and the position of the face may be the position of an endpoint of the face frame corresponding to the face in each shot frame of the video. With continued reference to fig. 7, the face 201 in fig. 7 may have a height h1 and a width w1.
S604: and determining a clipping region according to the size of the face and the position of the face.
In the embodiment of the present application, the mobile phone 100 may determine the clipping area according to the size of the face and the position of the face, in combination with the scene types (shot scales) shown in fig. 3. The above-described procedure may be implemented by the mobile phone 100 executing steps S604a to S604f shown in fig. 6b.
S604a: and determining the outer frame of the human face.
In this embodiment of the present application, the face outer frame may be the minimum bounding rectangle of the face frames corresponding to all the faces detected in step S602. If there is only one face in the shooting picture, the face outer frame may be identical to the face frame of that face. As shown in fig. 8a, the shooting picture 101 includes a face 201 and a face 202, whose face frames are the face frame 2011 and the face frame 2021, respectively. The face outer frame 301 may be the minimum bounding rectangle of the face frame 2011 and the face frame 2021.
In the embodiment of the present application, as shown in fig. 8a, the width of the face outer frame 301 may be the distance between the left side edge of the face frame 2011 and the right side edge of the face frame 2021, and the height of the face outer frame 301 may be the distance between the lower side edge of the face frame 2011 and the upper side edge of the face frame 2021.
In this embodiment of the present application, the mobile phone 100 may further select, from the face frames corresponding to all detected faces, the face frame with the highest position in the shooting picture, and obtain its width and height, such as the face frame 2011 shown in fig. 8b. Using the width and the height of this highest face frame, the mobile phone 100 draws a face area 302 around the center point of the face outer frame 301, so that the size of the face area 302 matches the size of the face frame 2011 with the highest position in the shooting picture. It can be appreciated that the center point of the face area 302 may coincide with the center point of the face outer frame 301; when the face moves, the face outer frame 301 and the face area 302 move with it, and when the size of the face changes, the face outer frame 301 and the face area 302 change as well.
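The face outer frame and face area construction in S604a can be sketched as follows; rectangles are represented as hypothetical (left, top, width, height) tuples, and the detector supplying the face frames is assumed:

```python
def outer_frame(frames):
    """Minimum bounding rectangle of all face frames (l, t, w, h)."""
    left = min(f[0] for f in frames)
    top = min(f[1] for f in frames)
    right = max(f[0] + f[2] for f in frames)
    bottom = max(f[1] + f[3] for f in frames)
    return (left, top, right - left, bottom - top)

def face_area(frames):
    """Rectangle sized like the highest face frame (smallest top
    coordinate), centred on the centre of the face outer frame."""
    l, t, w, h = outer_frame(frames)
    hi = min(frames, key=lambda f: f[1])  # highest face frame
    cx, cy = l + w / 2, t + h / 2
    return (cx - hi[2] / 2, cy - hi[3] / 2, hi[2], hi[3])
```

Because both rectangles are recomputed from the current face frames, they follow the faces as they move or change size, as the text describes.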
S604b: the height of the cropped area is determined.
In this embodiment of the present application, the mobile phone 100 may calculate the height of the clipping area from the height of the face with the highest position in the shooting picture, based on the scene types shown in fig. 3, for example: close-up, news, medium close-up, medium shot, and medium long shot. For example, as shown in fig. 8c, the mobile phone 100 may first use the preset display ratio corresponding to the close-up shown in fig. 3, for example 0.4, and calculate the height H1 of the clipping area 103 from the height h1 of the face frame 2011 of the face 201, that is, H1 = h1/0.4; for example, if the height h1 of the face frame 2011 is 2 cm, the height H1 of the clipping area 103 is 5 cm. The mobile phone 100 may then determine whether the obtained height of the clipping area 103 exceeds the height of the shooting picture, and whether the height of the face outer frame 301 exceeds the height of the clipping area 103. If either is exceeded, the mobile phone 100 recalculates the height of the clipping area 103 from the height of the face using the preset display ratio of the next scene type (e.g., news), proceeding through the scene types in order until the height of the clipping area 103 is smaller than the height of the shooting picture and the height of the face outer frame is smaller than the height of the clipping area 103.
In this embodiment of the present application, if the mobile phone 100 has traversed all the scene types shown in fig. 3 and the height of the clipping area 103 obtained from the height of the face and each preset display ratio still exceeds the height of the shooting picture, the mobile phone 100 may leave the shooting picture uncropped and continue to use its original size.
In this embodiment of the present application, if, after the mobile phone 100 has traversed the scene types shown in fig. 3, the distance from the upper edge of the clipping area 103 to the upper edge of the face frame 2011 (i.e., the face frame with the highest position in the shooting picture) is still smaller than 0, the mobile phone 100 may likewise leave the shooting picture uncropped and continue to use its original size.
In this embodiment of the present application, if the mobile phone 100 determines that, for the remaining scene type shown in fig. 3, the distance from the upper edge of the clipping area 103 to the upper edge of the face frame 2011 (i.e., the face frame with the highest position in the shooting picture) does not meet that scene type's setting, the mobile phone 100 may set the clipping area based only on the preset display ratio between the height of the face and the height of the clipping area, and disregard the distance from the upper edge of the clipping area 103 to the upper edge of the face frame 2011.
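The scene-type traversal of S604b can be sketched as below. Only the close-up ratio 0.4 appears in the text; the other scene-type ratios are assumed placeholder values:

```python
# Hypothetical display ratios per scene type, in the order of fig. 3.
# Only 0.4 (close-up) is stated in the text; the rest are assumptions.
SCENE_RATIOS = {"close-up": 0.4, "news": 0.5, "medium close-up": 0.6,
                "medium shot": 0.7, "medium long shot": 0.8}

def crop_height(face_h, outer_h, picture_h):
    """Walk the scene types until the crop height fits inside the
    picture and contains the face outer frame; if no scene type fits,
    fall back to the uncropped picture height."""
    for ratio in SCENE_RATIOS.values():
        h = face_h / ratio
        if h <= picture_h and outer_h <= h:
            return h
    return picture_h  # traversed all scene types: keep the full picture
```

With the text's example (face height 2, display ratio 0.4) this yields a crop height of 5; if the face outer frame is too tall for every scene type, the full picture height is kept.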
S604c: the width of the clipping region is determined.
In the embodiment of the present application, with continued reference to fig. 8c, the mobile phone 100 may determine the width W1 of the clipping area 103 according to the aspect ratio of the screen of the mobile phone 100, for example 16:9, and the height H1 of the clipping area 103 acquired in step S604b. This ensures that the aspect ratio of the clipping area 103 matches the aspect ratio of the screen of the mobile phone 100, so that the clipping area 103 appears natural on the screen of the mobile phone 100 without an unbalanced width-to-height ratio.
In this embodiment, the mobile phone 100 may further preset a ratio threshold (e.g., 0.8) between the face outer frame and the clipping area. If the mobile phone 100 determines that the ratio of the width of the face outer frame to the width of the clipping area is greater than 0.8, the mobile phone 100 determines that the width of the clipping area is too small, and may widen it by setting its width to the width of the face outer frame divided by a preset adjustment ratio threshold (e.g., 0.79). The mobile phone 100 may then further adjust the clipping area according to the aspect ratio of the screen of the mobile phone 100.
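A sketch of S604c under the example values 16:9, 0.8 and 0.79 given in the text; treating the compared width as that of the face outer frame is an assumption of this sketch:

```python
def crop_width(crop_h, outer_w, aspect=16 / 9,
               max_face_ratio=0.8, adjust=0.79):
    """Width from the screen aspect ratio, widened when the face outer
    frame would occupy more than max_face_ratio of the crop width."""
    w = crop_h * aspect
    if outer_w / w > max_face_ratio:
        w = outer_w / adjust  # widen so the faces fit comfortably
    return w
```

For a crop height of 9 and an outer-frame width of 10 the 16:9 width of 16 stands; an outer-frame width of 14 exceeds the 0.8 threshold, so the width becomes 14/0.79.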
S604d: and aligning the center point of the clipping region with the center point of the face outer frame.
In this embodiment of the present application, the mobile phone 100 uses the center point of the face outer frame as the center point of the clipping area, aligning the two so that the person in the shooting picture is located near the center of the clipping picture. In this way, the mobile phone 100 displays the person in the shot video more prominently in the clipping picture.
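S604d amounts to centring the crop rectangle on the face outer frame; a minimal sketch with (left, top, width, height) tuples:

```python
def centred_crop(outer, crop_w, crop_h):
    """Place the clipping area so its centre coincides with the centre
    of the face outer frame (l, t, w, h)."""
    l, t, w, h = outer
    cx, cy = l + w / 2, t + h / 2
    return (cx - crop_w / 2, cy - crop_h / 2, crop_w, crop_h)
```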
S604e: a first threshold region is set for the clipping region.
In this embodiment of the present application, the first threshold area is a reference area used to determine whether the person has moved too far within the clipping area. As shown in fig. 8d, the center point of the first threshold area 303 coincides with the center point of the clipping area 103, and the mobile phone 100 determines whether the edge of the face area 302 set in step S604a exceeds the first threshold area 303. If it does, the person has deviated from the central position of the clipping area 103, and the mobile phone 100 can clip the shooting picture again so that the person is once more located at the center of the clipping area; that is, the mobile phone 100 tracks the movement of the person and can adjust the clipping of the shooting picture in real time.
In this embodiment of the present application, table 1 shows the values of the first threshold area for each scene type, where "left/right", "upper" and "lower" denote the margins of the first threshold area, that is, the ratio of the distance from each edge of the first threshold area to the corresponding edge of the clipping area, to the width or height of the clipping area (the first threshold area therefore lies inside the clipping area). The scene types shown in table 1 may be identical to those shown in fig. 3. That is, the mobile phone 100 may set the first threshold area of the clipping area by looking up in table 1 the values corresponding to the scene type used when setting the clipping area. For example, if the mobile phone 100 set the height of the clipping area according to the preset display ratio of the close-up, the mobile phone 100 may set the first threshold area of the clipping area according to the values corresponding to the close-up in table 1.
Scene type                  Left/right   Upper   Lower
Close-up                    0.3          0.1     0.3
News                        0.32         0.1     0.3
Medium close-up             0.33         0.1     0.3
Medium shot (cowboy shot)   0.34         0.1     0.3
Medium long shot            0.35         0.1     0.3
TABLE 1
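Looking up Table 1 and insetting the clipping area by its margins can be sketched as follows; the English scene-type labels are this sketch's own naming:

```python
# Margins from Table 1 as (left/right, upper, lower) fractions of the
# clipping area's width or height.
TABLE1 = {"close-up": (0.3, 0.1, 0.3), "news": (0.32, 0.1, 0.3),
          "medium close-up": (0.33, 0.1, 0.3),
          "medium shot": (0.34, 0.1, 0.3),
          "medium long shot": (0.35, 0.1, 0.3)}

def first_threshold_area(crop, scene):
    """Inset the clipping area (l, t, w, h) by the Table 1 margins."""
    lr, up, lo = TABLE1[scene]
    l, t, w, h = crop
    return (l + lr * w, t + up * h, w * (1 - 2 * lr), h * (1 - up - lo))
```

Note that with an upper margin of 0.1 and a lower margin of 0.3 the threshold area sits slightly above the geometric centre of the crop, which suits framing faces in the upper part of the picture.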
S604f: a second threshold region is set for the clipping region.
In this embodiment of the present application, if the person in the clipping area moves, the position of the face frame corresponding to that person changes accordingly, and so does the width or height of the face outer frame determined by the mobile phone 100 in step S604a. As shown in fig. 8e, the center point of the second threshold area 304 also coincides with the center point of the clipping area 103. The second threshold area 304 is used to determine whether the edge of the face outer frame exceeds the second threshold area 304; if it does, the person occupies too much of the clipping area, leaving too little background, and the mobile phone 100 can clip the shooting picture again so that the person and the background in the clipping area are properly proportioned.
In the embodiment of the present application, the mobile phone 100 may set the left/right margins and the upper/lower margins of the second threshold area to 0.1, that is, the ratio of the distance from each edge of the second threshold area to the corresponding edge of the clipping area, to the width or height of the clipping area, is 0.1. It is understood that the first threshold area is located inside the second threshold area.
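The second threshold area is the same kind of inset with a fixed 0.1 margin on every side; a minimal sketch:

```python
def second_threshold_area(crop, margin=0.1):
    """Inset the clipping area (l, t, w, h) by the 0.1 margin on all
    four sides, per the text."""
    l, t, w, h = crop
    return (l + margin * w, t + margin * h,
            w * (1 - 2 * margin), h * (1 - 2 * margin))
```

With a 100x100 crop this gives (10, 10, 80, 80), which indeed contains the first threshold area produced by the Table 1 margins (e.g., close-up: (30, 10, 40, 60)).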
S605: and cutting the shot picture by using the cutting area to obtain a cutting picture.
In this embodiment of the present application, after the mobile phone 100 determines the size of the clipping area, the mobile phone 100 may clip the shooting picture collected by the camera, with the center of the face outer frame as the center of the clipping area, to obtain a clipping picture, which may then be displayed on the screen of the mobile phone 100. If the mobile phone 100 is shooting video, the mobile phone 100 uses the clipping picture to generate the video, and if the mobile phone 100 is live-streaming video, the clipping picture is used as the live picture. It can be understood that the clipping picture displayed by the mobile phone 100 is a part of the picture captured by the camera of the mobile phone 100.
It can be understood that, in the embodiment of the present application, the order of steps S604e, S604f and S605 is not limited: the mobile phone 100 may first clip the shooting picture 101 according to the size of the clipping area 103 and, after obtaining the clipping picture, set the first threshold area and the second threshold area for it.
In embodiments of the present application, the values included in the image clipping schemes described in fig. 6a and 6b above are exemplary, and other values may be used. With the image clipping schemes described by fig. 6a and 6b, the face in the clipping picture can be located at the center of the clipping picture, so that the face is more prominent and more expressive; the size of the face relative to the size of the clipping picture also conforms to a preset clipping rule, namely the scene type, giving the clipping picture better composition. Meanwhile, the image clipping scheme further sets the first threshold area and the second threshold area for the clipping picture, so that the mobile phone 100 can detect the position of the face in real time and, when the position of the face no longer satisfies the first threshold area or the second threshold area, clip the shooting picture again. The following describes, with fig. 9, how the mobile phone 100 re-clips the shooting picture after detecting that a face has moved.
It can be understood that after the mobile phone 100 determines the clipping frame through the shooting frame collected by the camera, the person in the clipping frame moves, and the mobile phone 100 needs to determine the clipping frame again. An image display method for the mobile phone 100 is described below with reference to fig. 9. As shown in fig. 9, the image display method for the mobile phone 100 according to one embodiment of the present application includes the following steps.
S901: video is shot.
In the embodiment of the present application, the user may start the camera application of the mobile phone 100, and perform video shooting or live video broadcasting in the face of the person 200.
S902: judging whether a face mutation occurs.
In this embodiment of the present application, while the mobile phone 100 continuously shoots or live-streams video, the mobile phone 100 may further determine whether a face mutation occurs. A face mutation means that, while the mobile phone 100 performs face detection on the shooting picture using the optical flow method or the continuous adaptive algorithm described in step S602, an abnormality of the algorithm causes the mobile phone 100 to momentarily fail to detect the face, although the person has not actually left the camera of the mobile phone 100; the mobile phone 100 would otherwise wrongly conclude that the person has moved away and fail to recognize the face in the picture collected by the camera. If the mobile phone 100 determines that a face mutation has occurred, the mobile phone 100 executes step S904 and continues to detect whether the face in the shooting picture moves, so that the clipping picture can be re-determined; otherwise, the mobile phone 100 executes step S903 and may directly display the shooting picture or save it as a video.
This check prevents the mobile phone 100 from mistaking a shooting picture in which the face was momentarily lost for a shooting picture that truly contains no face, in which case the mobile phone 100 would display the shooting picture or save it as a video without clipping it.
S903, displaying a shooting picture.
In this embodiment of the present application, when the mobile phone 100 determines that the photographed image acquired by the camera of the mobile phone 100 does not include a face, the method includes: the person leaves the shooting range of the camera of the mobile phone 100, or no person exists in the shooting range of the camera of the mobile phone 100, the mobile phone 100 does not cut the shooting picture, the shooting picture can be displayed in the screen of the mobile phone 100, if the mobile phone 100 shoots a video, the mobile phone 100 uses the shooting picture to generate the video, and if the mobile phone 100 performs live video, the shooting picture is taken as a live video.
S904: judging whether the face area exceeds the first threshold area.
In this embodiment of the present application, if the person in the shooting picture moves, for example to the right, so that the face area exceeds the first threshold area, the mobile phone 100 executes step S906, re-determining and displaying the clipping picture according to the size and position of the moved face. If not, the mobile phone 100 executes step S905 and determines whether the face outer frame exceeds the second threshold area.
As shown in fig. 10a, as the face 201 and the face 202 move to the right, the face frames 2011 and 2021 also move to the right, so that the face outer frame 301 (the minimum circumscribed rectangle of the face frames 2011 and 2021) and the face area 302, whose center point coincides with that of the face outer frame 301, also move to the right. The center point of the first threshold area 303 coincides with the center point of the clipping area 103; that is, the first threshold area 303 remains stationary while the face outer frame 301 moves to the right. The mobile phone 100 determines that the face area 302 exceeds the first threshold area 303 when, as the face area 302 moves to the right along with the face outer frame 301, the right side edge of the face area 302 crosses the right side edge of the first threshold area 303. This indicates that the face 201 and the face 202 have deviated from the central position of the clipping picture and sit too far to the right in it.
S905: judging whether the face outer frame exceeds the second threshold area.
In this embodiment of the present application, if the persons in the shooting picture move, for example in opposite directions, so that the face outer frame grows and exceeds the second threshold area, the mobile phone 100 executes step S906, re-determining and displaying the clipping picture according to the size and position of the moved faces. If not, the mobile phone 100 returns to step S903 and continues to display the shooting picture.
As shown in fig. 10b, as the face 201 moves to the right and the face 202 moves to the left, the face frames 2011 and 2021 also move to the right and to the left, respectively, so that the right and left sides of the face outer frame 301 (the minimum circumscribed rectangle of the face frames 2011 and 2021) move outward and the width of the face outer frame 301 grows. The center point of the second threshold area 304 coincides with the center point of the clipping area 103; that is, the second threshold area 304 remains unchanged while the face outer frame 301 widens. The mobile phone 100 determines that the face outer frame 301 exceeds the second threshold area 304 when, as its right and left side edges move outward, the right side edge of the face outer frame 301 crosses the right side edge of the second threshold area 304, the left side edge of the face outer frame 301 crosses the left side edge of the second threshold area 304, or both. In that case, the mobile phone 100 determines that the clipping picture is too small and needs to re-determine the clipping picture.
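Both checks in S904 and S905 reduce to asking whether one rectangle sticks out of another; a minimal sketch using (left, top, width, height) tuples:

```python
def exceeds(inner, outer):
    """True if rectangle `inner` sticks out of `outer` on any side -- the
    re-crop trigger used for both the first and second threshold checks."""
    il, it, iw, ih = inner
    ol, ot, ow, oh = outer
    return (il < ol or it < ot or
            il + iw > ol + ow or it + ih > ot + oh)
```

S904 would call this with the face area against the first threshold area, and S905 with the face outer frame against the second threshold area.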
S906: and determining a clipping picture according to the size of the face and the position of the face.
In the embodiment of the present application, in the case where the mobile phone 100 determines that the face in the shot frame moves, the mobile phone 100 may determine the clipping frame again according to the size of the moved face and the position of the face, and display the clipping frame, using a method similar to step S604 of fig. 6 a.
In embodiments of the present application, the values included in the image cropping scheme described in fig. 9 above are exemplary, and in embodiments of the present application, other values may be used for representation.
Fig. 6a to 6b and fig. 9 described above describe an image display method for the mobile phone 100. Through the scheme, the mobile phone 100 can cut a shooting picture containing a human face according to a preset cutting rule, the size of the cutting picture is ensured to be in accordance with the size of the human face, so that the human face has expressive force in the cutting picture, meanwhile, the mobile phone 100 can also detect the change (position change and size change) of the human face in the shooting picture in real time, after the mobile phone 100 determines that the change of the human face exceeds a set threshold area, the mobile phone 100 can cut the shooting picture again according to the changed human face, so that a person in a video shot by the mobile phone 100 can be always located in the center of the picture, and the aesthetic feeling between a person and the picture is maintained.
Another image display method according to an embodiment of the present application is described below with reference to fig. 11. In this method, after the mobile phone 100 acquires a shooting picture through the camera and detects that the shooting picture contains a human face, the mobile phone 100 determines, according to the human voice collected by the microphone, the target face emitting the voice in the shooting picture, and crops the shooting picture based on the position and size of the target face to obtain and display a cropped picture. As shown in fig. 11, the image display method for the mobile phone 100 according to one embodiment of the present application includes the following steps.
S1101: starting the camera application to shoot a video.
In this embodiment, step S1101 is similar to step S601 of fig. 6a: the user may place the mobile phone 100 on the stand as shown in fig. 1, start the camera application of the mobile phone 100, and, facing the person 200, shoot a video or live-stream.
S1102: performing face detection on the shooting picture and judging whether a face is detected.
In the embodiment of the present application, step S1102 is similar to step S602 of fig. 6a: the mobile phone 100 may perform face detection on each frame of the captured video.
In the embodiment of the present application, if the mobile phone 100 determines that a face exists in the shooting picture, the mobile phone 100 performs step S1103: collecting the voice through the microphone and determining the position, in the shooting picture, of the face emitting the voice. Otherwise, the mobile phone 100 returns to step S1102 and continues to perform face detection on the shooting picture.
S1103: collecting the voice and determining the face emitting the voice in the shooting picture.
In the embodiment of the present application, the mobile phone 100 may collect the sound of the current environment through the microphone and extract the human voice from it. After the mobile phone 100 obtains the voice, the position of the person emitting the voice in the current environment can be determined through an Angle-of-Arrival (AOA) algorithm according to the intensity of the voice and the time at which the voice reaches the mobile phone 100, and that position is then projected onto the shooting picture to obtain the face emitting the voice in the shooting picture.
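As a rough illustration of the AOA idea, a two-microphone far-field model relates the arrival-time difference between the microphones to the source direction. The patent only names the AOA technique, so the specific formula, microphone spacing, and speed of sound below are assumptions for illustration.

```python
import math

# Illustrative two-microphone far-field angle-of-arrival sketch (assumed model,
# not from the patent): sin(theta) = c * delta_t / d, theta measured from the
# direction perpendicular to the microphone pair.

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def angle_of_arrival(delta_t, mic_spacing):
    """Estimate the arrival angle in degrees from the inter-mic time difference."""
    s = SPEED_OF_SOUND * delta_t / mic_spacing
    s = max(-1.0, min(1.0, s))  # clamp small numerical overshoot
    return math.degrees(math.asin(s))
```

A sound arriving simultaneously at both microphones maps to 0 degrees (straight ahead); larger time differences map to angles further off-axis, which can then be projected onto the shooting picture.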
It will be appreciated that the mobile phone 100 may also collect human voice through hardware devices such as a microphone array and left and right channel speakers connected to the mobile phone 100.
S1104: the size of the face and the position of the face are obtained.
In the embodiment of the present application, step S1104 is similar to step S603 of fig. 6a. It can be understood that if the mobile phone 100 determines that there are multiple faces emitting voice in the shooting picture, the mobile phone 100 may obtain the sizes and positions of the multiple faces.
S1105: determining a clipping picture according to the size of the face and the position of the face.
In the embodiment of the present application, step S1105 is similar to step S604 of fig. 6a: the mobile phone 100 may determine the clipping picture according to the size and position of the face, in combination with the shot scale (field of view) shown in fig. 3. It can be understood that if the mobile phone 100 determines that there are multiple faces emitting voice in the shooting picture, the mobile phone 100 may crop the shooting picture according to the sizes and positions of the multiple faces; if the mobile phone 100 determines that only one face emitting voice exists in the shooting picture, the mobile phone 100 may use the face frame of that face as the face outer frame, and crop the shooting picture according to the size and position of the face outer frame to obtain the cropped picture.
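The size-and-position rule of step S1105 can be sketched as below. The 3x/4x width and height ratios, the centering on the face, and the clamping to the captured frame are illustrative assumptions; the actual ratios would come from the shot scale of fig. 3.

```python
# Hypothetical sketch of deriving a crop rectangle from a face outer frame:
# the crop's width/height are preset multiples of the face's width/height,
# centered on the face and clamped inside the captured image. The 3.0/4.0
# defaults are placeholders, not values from the patent.

def crop_from_face(face, img_w, img_h, w_ratio=3.0, h_ratio=4.0):
    fl, ft, fr, fb = face  # face outer frame: (left, top, right, bottom)
    cw, ch = (fr - fl) * w_ratio, (fb - ft) * h_ratio
    cx, cy = (fl + fr) / 2, (ft + fb) / 2
    left = min(max(cx - cw / 2, 0), img_w - cw)  # clamp to image bounds
    top = min(max(cy - ch / 2, 0), img_h - ch)
    return (left, top, left + cw, top + ch)
```

Clamping keeps the crop inside the frame when the face is near an edge, at the cost of the face no longer being exactly centered in that case.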
In the embodiment of the present application, in the image clipping scheme described in fig. 11, if the persons in the shooting picture are in a dialogue with each other, the mobile phone 100 may crop the shooting picture in real time according to the collected voice, thereby obtaining cropped pictures with different fields of view.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various features, these features should not be limited by these terms. These terms are used merely for distinguishing and are not to be construed as indicating or implying relative importance. For example, a first feature may be referred to as a second feature, and similarly a second feature may be referred to as a first feature, without departing from the scope of the example embodiments.
Furthermore, various operations will be described as multiple discrete operations, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed to imply that these operations are necessarily order dependent, and many of the operations may be performed in parallel, concurrently, or together with other operations. Furthermore, the order of the operations may be rearranged. The process may be terminated when the described operations are completed, but may also have additional operations not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
References in the specification to "one embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature is described in connection with a particular embodiment, it is within the knowledge of one skilled in the art to effect such feature in connection with other embodiments, whether or not such embodiment is explicitly described.
The terms "comprising," "having," and "including" are synonymous, unless the context dictates otherwise. The phrase "A/B" means "A or B". The phrase "a and/or B" means "(a), (B) or (a and B)".
As used herein, the term "module" may refer to, be part of, or include: a memory (shared, dedicated, or group) for running one or more software or firmware programs, an Application Specific Integrated Circuit (ASIC), an electronic circuit and/or processor (shared, dedicated, or group), a combinational logic circuit, and/or other suitable components that provide the described functionality.
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering is not required. Rather, in some embodiments, these features may be described in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or methodological feature in a particular drawing does not imply that all embodiments need to include such feature, and in some embodiments may not be included or may be combined with other features.
The embodiments of the present application have been described in detail above with reference to the accompanying drawings, but the application of the technical solution of the present application is not limited to the applications mentioned in the embodiments of the present application, and various structures and modifications can be easily implemented with reference to the technical solution of the present application, so as to achieve the various beneficial effects mentioned herein. Various changes, which may be made by those of ordinary skill in the art without departing from the spirit of the present application, are intended to be covered by the claims herein.

Claims (11)

1. An image display method applied to an electronic device, comprising:
acquiring a first initial image, wherein the first initial image comprises at least one cropping target, and the at least one cropping target comprises a first cropping target;
clipping a first clipping region with a first clipping size from the first initial image to obtain and display a first clipping image comprising the at least one clipping target, wherein the first clipping target has a first position relative to a first reference region corresponding to the first clipping size in the first clipping region, and the first position does not meet a preset clipping condition;
Acquiring a second initial image comprising the at least one cropping target;
clipping a second clipping region with a first clipping size from the second initial image to obtain and display a second clipping image comprising the at least one clipping target, wherein the first clipping target has a second position relative to the first reference region of the second clipping region corresponding to the first clipping size;
and under the condition that the second position meets a preset clipping condition, clipping a third clipping region with a second clipping size from the acquired third initial image comprising the at least one clipping target to obtain and display a third clipping image, wherein the first clipping target has a third position relative to a second reference region corresponding to the second clipping size in the third clipping region, and the third position does not meet the preset clipping condition.
2. The method of claim 1, wherein the first clipping target comprises a human face.
3. The method of claim 1, wherein the first clipping size is a ratio of a width and a height of the first clipping region to a width and a height of the first clipping target, respectively, conforming to a first preset size ratio.
4. The method of claim 1, wherein the first reference region coincides with a center point of the first crop region and the first reference region is located inside the first crop region.
5. The method of claim 4, wherein in the case where the first initial image includes only the first cropping target, the first cropping target is smaller in size than the first reference region, and
the preset cutting conditions comprise: at least a partial region of the clipping target exceeds the first reference region.
6. The method as recited in claim 4, further comprising:
in the case that the first initial image includes a plurality of clipping targets, a fitting clipping target of the plurality of clipping targets is generated according to circumscribed rectangles of the plurality of clipping targets, wherein a center of the fitting clipping target coincides with a center of the circumscribed rectangle, and a size of the fitting clipping target is the same as a size of one clipping target of the plurality of clipping targets.
7. The method of claim 6, wherein the fit clipping target is smaller in size than the first reference region, and
The preset cutting conditions comprise: at least a partial region of the fit clipping target is located outside the first reference region.
8. The method as recited in claim 6, further comprising:
setting a third reference area comprising the circumscribed rectangle in the first clipping area, wherein the third reference area changes along with the change of the distance between the clipping targets; and the preset clipping conditions further include:
at least a partial area of the circumscribed rectangle exceeds the third reference area.
9. A readable medium having stored thereon instructions which, when executed on an electronic device, cause the electronic device to perform the image display method of any of claims 1-8.
10. An electronic device, comprising:
a memory for storing instructions for execution by one or more processors of the electronic device, an
A processor, being one of the processors of an electronic device, for performing the image display method of any one of claims 1-8.
11. A computer program product, comprising: a non-transitory computer readable storage medium containing computer program code for performing the image display method of any one of claims 1 to 8.
CN202111579874.3A 2021-12-22 2021-12-22 Image display method, electronic device and medium Pending CN116342639A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111579874.3A CN116342639A (en) 2021-12-22 2021-12-22 Image display method, electronic device and medium
PCT/CN2022/140826 WO2023116792A1 (en) 2021-12-22 2022-12-21 Image display method, electronic device thereof, and medium

Publications (1)

Publication Number Publication Date
CN116342639A true CN116342639A (en) 2023-06-27

Also Published As

Publication number Publication date
WO2023116792A1 (en) 2023-06-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination