CN117480772A - Video display method and device, terminal equipment and computer storage medium

Info

Publication number: CN117480772A
Application number: CN202280004304.8A
Authority: CN (China)
Prior art keywords: target, picture, frame, cutting frame, video
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 彭聪
Current assignee: Beijing Xiaomi Mobile Software Co Ltd
Original assignee: Beijing Xiaomi Mobile Software Co Ltd
Application filed by Beijing Xiaomi Mobile Software Co Ltd

Abstract

The disclosure provides a video display method and device, a terminal device, and a computer storage medium. The video display method is applied to a terminal device and includes the following steps: acquiring an initial cutting frame when video acquisition is performed (S101); cutting, based on the initial cutting frame, a preview picture acquired when the terminal device starts the camera (S102); and displaying the cut picture (S103).

Description

Video display method and device, terminal equipment and computer storage medium

Technical Field
The present disclosure relates to the field of wireless communications and video communications, and in particular, to a video display method and apparatus, a terminal device, and a computer storage medium.
Background
With the continued development of the electronic product industry, many functions of intelligent terminals have been designed, developed and put into use. In the field of wireless communication, intelligent terminals such as mobile phones can serve as communication tools that not only meet people's voice call needs but also support video calls, video conferences and the like.
Currently, video calls and video conferences carry a great many work and communication scenarios for contemporary people. Moreover, under the influence of the epidemic, people spend more and more time working from home, and video conferences and video calls are used more and more frequently. In a video conference or video call scenario, the intelligent terminal captures video through a camera and displays it on a display screen, or synchronizes it to the other communicating user through a cloud server. However, when a user uses an intelligent terminal for a video conference or video call, the size of the video picture cannot be adjusted, the user's privacy is easily exposed within the viewing range of the camera, and the user's experience is affected.
Disclosure of Invention
The embodiment of the disclosure provides a video display method and device, terminal equipment and a computer storage medium.
A first aspect of an embodiment of the present disclosure provides a video display method, applied in a terminal device, where the method includes:
acquiring an initial cutting frame when video acquisition is performed;
based on the initial cutting frame, cutting a preview picture acquired when the terminal equipment starts a camera;
and displaying the cut picture.
In some embodiments, the method further comprises:
under the condition that the terminal equipment moves, a target cutting frame is obtained based on the initial cutting frame; the frame selection content of the initial cutting frame and the frame selection content of the target cutting frame comprise the same target object;
acquiring a target video picture based on the target cutting frame and a target acquisition picture acquired after the terminal equipment moves;
the displaying the cut picture comprises the following steps:
and displaying the target video picture.
In some embodiments, the obtaining a target crop box based on the initial crop box includes:
acquiring motion information of the terminal equipment during motion;
and adjusting the initial cutting frame based on the motion information to obtain the target cutting frame.
In some embodiments, the motion information includes: a movement distance of the terminal device relative to the target object and/or a rotation angle of the terminal device relative to the target object;
the adjusting the initial cutting frame based on the motion information to obtain the target cutting frame includes:
based on the moving distance, adjusting the size of the initial cutting frame to obtain the target cutting frame; and/or
And adjusting the position of the initial cutting frame based on the rotation angle to obtain the target cutting frame.
In some embodiments, the obtaining a target video frame based on the target crop frame and a target acquisition frame acquired after the movement of the terminal device includes:
covering the picture outside the target cutting frame, and taking the covered picture and the picture in the target cutting frame as the target video picture;
or,
cutting the target acquisition picture based on the target cutting frame to obtain a cutting picture; and amplifying the clipping picture to obtain the target video picture.
In some embodiments, the obtaining an initial crop box includes:
Determining the initial cutting frame based on a moving track acted on the preview picture;
or,
and carrying out target detection on the preview picture, and taking a display frame corresponding to the detected target object as the initial cutting frame.
In some embodiments, the obtaining a target video frame based on the target crop frame and a target acquisition frame acquired after the movement of the terminal device includes:
and under the condition that the target cutting frame part is positioned in the target acquisition picture, reducing the size of the target cutting frame, and obtaining the target video picture based on the reduced cutting frame.
In some embodiments, the displaying the cropped picture includes:
determining whether a sensitive object exists in the cut picture;
and under the condition that the cut picture has the sensitive object, blurring processing is carried out on the sensitive object or the picture where the sensitive object is positioned is cut.
In some embodiments, the displaying the cropped picture includes:
performing correction processing on the cut picture, and displaying the corrected picture.
A second aspect of an embodiment of the present disclosure provides a video display apparatus, where the video display apparatus is applied to a terminal device, where the apparatus includes:
The acquisition module is configured to acquire an initial cutting frame during video acquisition;
the clipping module is configured to clip the preview picture acquired when the terminal equipment starts the camera based on the initial clipping frame;
and the display module is configured to display the cut picture.
In some embodiments, the apparatus further comprises:
the adjusting module is configured to obtain a target cutting frame based on the initial cutting frame under the condition that the terminal equipment moves; the frame selection content of the initial cutting frame and the frame selection content of the target cutting frame comprise the same target object;
the cropping module is further configured to: acquiring a target video picture based on the target cutting frame and a target acquisition picture acquired after the terminal equipment moves;
the display module is further configured to: and displaying the target video picture.
In some embodiments, the adjustment module comprises:
the detection unit is configured to acquire motion information when the terminal equipment moves;
and the adjusting unit is configured to adjust the initial cutting frame based on the motion information to obtain the target cutting frame.
In some embodiments, the motion information includes: a movement distance of the terminal device relative to the target object and/or a rotation angle of the terminal device relative to the target object:
the adjusting unit is further configured to adjust the size of the initial cutting frame based on the moving distance to obtain the target cutting frame; and/or
And adjusting the position of the initial cutting frame based on the rotation angle to obtain the target cutting frame.
In some embodiments, the clipping module is further configured to:
covering the picture outside the target cutting frame, and taking the covered picture and the picture in the target cutting frame as the target video picture;
or,
cutting the target acquisition picture based on the target cutting frame to obtain a cutting picture; and amplifying the clipping picture to obtain the target video picture.
In some embodiments, the acquisition module is further configured to:
determining the initial cutting frame based on a moving track acted on the preview picture;
or,
and carrying out target detection on the preview picture, and taking a display frame corresponding to the detected target object as the initial cutting frame.
In some embodiments, the clipping module is further configured to:
and under the condition that the target cutting frame part is positioned in the target acquisition picture, reducing the size of the target cutting frame, and obtaining the target video picture based on the reduced cutting frame.
In some embodiments, the display module is further configured to:
determining whether a sensitive object exists in the cut picture;
and under the condition that the cut picture has the sensitive object, blurring processing is carried out on the sensitive object or the picture where the sensitive object is positioned is cut.
In some embodiments, the display module is further configured to:
performing correction processing on the cut picture, and displaying the corrected picture.
A third aspect of an embodiment of the present disclosure provides a terminal device, including a processor, a memory, and an executable program stored on the memory and capable of being executed by the processor, where the processor executes the video display method as provided in the foregoing first aspect when the executable program is executed by the processor.
A fourth aspect of the disclosed embodiments provides a computer storage medium storing an executable program; the executable program, when executed by the processor, can implement the video display method provided in the foregoing first aspect.
According to the technical solution provided by the embodiments of the disclosure, an initial cutting frame is obtained when video acquisition is performed, so that the video picture area that the user wants to display can be accurately and effectively determined; the preview picture acquired when the terminal device starts the camera is cut based on the initial cutting frame, so that the size and display content of the displayed picture can be adjusted, effectively meeting the user's needs, reducing the privacy exposure caused by an overly large camera viewfinder range, and ensuring the user's safety and experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the embodiments of the invention.
Fig. 1 is a first flowchart of a video display method according to an exemplary embodiment;
Fig. 2 is a second flowchart of a video display method according to an exemplary embodiment;
Fig. 3 is a third flowchart of a video display method according to an exemplary embodiment;
Fig. 4 is a schematic diagram of a preview picture collected by a terminal device according to an exemplary embodiment;
Fig. 5 is a schematic diagram of a target acquisition picture acquired by a terminal device according to an exemplary embodiment;
Fig. 6 is a flowchart of a video display method in a video communication scenario according to an exemplary embodiment;
Fig. 7 is a schematic structural diagram of a video display apparatus according to an exemplary embodiment;
Fig. 8 is a block diagram of a terminal device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with embodiments of the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of embodiments of the invention.
The terminology used in the embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the disclosure. As used in this disclosure, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present disclosure to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of embodiments of the present disclosure. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
An embodiment of the present disclosure provides a video display method. Fig. 1 is a schematic flowchart of a video display method according to an exemplary embodiment. The video display method is applied to a terminal device and is described below with reference to steps S101 to S103 shown in Fig. 1:
step S101, acquiring an initial cutting frame when video acquisition is performed;
step S102, based on an initial cutting frame, cutting a preview picture acquired when a terminal device starts a camera;
step S103, displaying the cut picture.
Here, the terminal device proposed by the present disclosure may be an intelligent terminal, including a mobile terminal or a portable electronic device; the mobile terminal includes, but is not limited to, a cell phone or a tablet, and the portable electronic device includes, but is not limited to, a smart watch, to which the embodiments of the disclosure are not limited.
In an embodiment of the present disclosure, before step S101, the video display method provided in the embodiment of the present disclosure further includes:
before the initial cutting frame is acquired, a camera of the terminal device is started.
Here, the terminal device proposed by the present disclosure includes a display screen and a camera that performs image acquisition.
According to the setting position, the camera can be divided into: a front camera located on one side of the display screen of the terminal device, or a rear camera located on the other side of the display screen of the terminal device; according to function, the camera can be divided into: a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, and so on.
It should be noted that the video display method provided by the present disclosure may be applied to different video acquisition scenes. In particular, the video capture may be a stand-alone scenario, such as: the mobile phone is used for shooting and recording videos, and the displayed shooting pictures and recording pictures are pictures cut based on the initial cutting frame; the video capture scene may also be a scene of multi-device interactions, such as: the plurality of terminals perform video conference or video call, and the displayed video communication picture is a picture which is cut based on the initial cutting frame and is synchronized with each terminal.
In the case of using terminal device video communication (such as video call or video conference), the video acquisition camera of the terminal device is an ultra-wide angle lens in the front camera; correspondingly, the target object acquired by the ultra-wide angle lens in the front camera is the user of the terminal equipment.
The display screen is used for displaying pictures acquired by the camera, and the display screen can be a touch display screen; the touch display screen realizes the function corresponding to the touch operation by sensing the touch operation of a user acting on the touch display screen. The terminal device can sense the touch operation of a user using the terminal device on the touch screen, such as zooming in or zooming out of the screen or selecting a target object, and the like, and display the zoomed-in or zoomed-out preview screen on the touch screen or independently display the target object touched by the user.
According to the technical solution provided by the embodiments of the disclosure, an initial cutting frame is obtained when video acquisition is performed, so that the video picture area that the user wants to display can be accurately and effectively determined; the preview picture acquired when the terminal device starts the camera is cut based on the initial cutting frame, so that the size and display content of the displayed picture can be adjusted, effectively meeting the user's needs, reducing the privacy exposure caused by an overly large camera viewfinder range, and ensuring the user's safety and experience.
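As a concrete (non-limiting) illustration of steps S101 to S103, the following sketch treats each camera picture as an H x W x 3 array and the cutting frame as a simple rectangle. The names CropBox, crop_preview and on_preview_frame are illustrative assumptions and do not appear in the original disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CropBox:
    x: int  # left edge of the cutting frame, in pixels of the captured picture
    y: int  # top edge
    w: int  # width
    h: int  # height

def crop_preview(frame: np.ndarray, box: CropBox) -> np.ndarray:
    """S102: cut the preview picture down to the region framed by the cutting frame."""
    return frame[box.y:box.y + box.h, box.x:box.x + box.w]

def on_preview_frame(frame: np.ndarray, box: CropBox) -> np.ndarray:
    """S101 has already produced `box`; S103 hands the returned picture to the display."""
    return crop_preview(frame, box)
```

In the single-device scenario the returned picture is shown locally; in the multi-device scenario it is the picture that gets synchronized to the other terminals.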
In some embodiments, fig. 2 is a second flowchart of a video display method according to an exemplary embodiment, and the video display method according to the present disclosure will be described below with reference to steps S104 to S105 shown in fig. 2.
Step S104, obtaining a target cutting frame based on the initial cutting frame under the condition that the terminal equipment moves; the frame selection content of the initial cutting frame and the frame selection content of the target cutting frame comprise the same target object;
step S105, a target video picture is obtained based on a target cutting frame and a target acquisition picture acquired after the terminal equipment moves;
the display cut screen in step S103 may be:
step S1031, a target video screen is displayed.
It should be noted that, in the case that the terminal device moves, based on the initial crop frame, the target crop frame is obtained, which may be implemented by the following examples:
in some examples, when video capture is performed, the position before the movement of the terminal device in step S104 is the position where the terminal device first starts the camera; the preview screen generated and displayed when the camera first captures the target object shows all of the content captured by the camera, the target object needs to be selected from all of that content, and an initial cropping frame containing the target object is set in the preview screen based on the target object. In this case, obtaining the target crop frame based on the initial crop frame in step S104 of the present disclosure may be: adjusting the initial cutting frame to directly obtain the target cutting frame.
In other examples, the position before the movement of the terminal device in step S104 is the position after the terminal device has undergone at least one movement during video acquisition; for the transition picture acquired after the terminal device has undergone at least one movement, the initial cutting frame needs to be adjusted to obtain a transition cutting frame, and the transition picture is cut and displayed based on the transition cutting frame. Based on this, obtaining the target crop frame based on the initial crop frame in step S104 of the present disclosure may be: adjusting the initial cutting frame to indirectly obtain the target cutting frame. Specifically: the transition cutting frame obtained by adjusting the initial cutting frame is itself adjusted to obtain the target cutting frame.
It should be noted that the movement of the terminal device may be, for example, the user touching the terminal device so that it turns over, shifts, or otherwise moves.
In the embodiment of the disclosure, after the terminal device moves, the view finding range of the camera of the terminal device is changed, the state of the target object in the collected image collected in the view finding range is correspondingly changed, and in order to adapt to the change of the state of the target object, the initial cutting frame is adjusted to obtain the target cutting frame, so that the target cutting frame contains the target object in the initial cutting frame.
Here, the state of the target object may be the size of the target object, the position of the target object, the posture of the target object, or the like, which is not limited by the present disclosure.
In an exemplary embodiment, in a view finding range of a camera of the terminal device before movement, a target object is located at the midpoint, and in a preview picture acquired by the terminal device, the target object is also located at the midpoint; after the terminal equipment rotates by a preset angle, in a view finding range after the camera of the terminal equipment moves, a target object is positioned at the edge, and the target object in a target acquisition picture acquired by the terminal equipment is also positioned at the edge; therefore, in order to adapt to the position change of the target object, the position of the initial cutting frame is adjusted to obtain the target cutting frame.
Here, the shapes of the target and initial crop frames may be regular shape crop frames, or may be formed according to the outline of the object within the crop frame, which is not limited in the present disclosure.
According to the scheme provided by the embodiments of the disclosure, when the initial cutting frame is acquired, the video picture area that the user wants to display can be accurately and effectively determined; when the terminal device moves, the initial cutting frame is adjusted into a target cutting frame, and the displayed target video picture is obtained based on the target cutting frame and the pictures acquired after the terminal device moves. In this way, the size and display content of the displayed picture can be adjusted, the user's needs are effectively met, the privacy exposure caused when the user accidentally bumps the terminal device and changes the view range of its camera is reduced, and the user's safety and experience are ensured.
In some embodiments, referring to fig. 3, fig. 3 is a flow chart diagram three of a video display method according to an exemplary embodiment. In step S104, based on the initial crop frame, the target crop frame is obtained, which may be implemented by step S1041 and step S1042 in fig. 3:
step S1041, obtaining motion information of the terminal equipment during motion;
step S1042, based on the motion information, the initial cutting frame is adjusted to obtain the target cutting frame.
In the embodiment of the disclosure, the motion information when the terminal equipment moves may be motion displacement of the terminal equipment, rotation angle of the terminal equipment, motion rate of the terminal equipment, and motion direction of the terminal equipment; but may also be a moving distance of the terminal device with respect to the target object, a rotation angle of the terminal device with respect to the target object, or the like, to which the present disclosure is not limited.
Specifically, referring to fig. 4, fig. 4 is a schematic diagram illustrating a preview screen collected by a terminal device according to an exemplary embodiment; fig. 5 is a schematic diagram of a target acquisition screen acquired by a terminal device according to an exemplary embodiment. Fig. 4 shows a preview image a acquired by a camera of the terminal device before movement and an initial crop frame X1 selected by a user; fig. 5 shows a target acquisition picture B acquired by the camera after the movement of the terminal device and an adjusted target crop frame X2.
Referring to fig. 4 and 5, the initial crop box X1 includes screen content selected by the user in the preview screen a, where the screen content includes a target object; along with the movement of the terminal equipment, the view finding position and the view finding range of the camera of the terminal equipment are changed, and the state of a target object in an acquired picture acquired under different view finding ranges is changed; for example, a change in the position of the target object, or a change in the size of the target object, etc. And according to the motion information of the terminal equipment, obtaining the state change of the target object from the preview picture A to the target acquisition picture B, and correspondingly adjusting the initial cutting frame X1 to be a target cutting frame X2 based on the state change, so that the target object is contained in the target cutting frame X2, and the target object can be displayed in the final target video picture.
In the present disclosure, the motion information of the terminal device in motion is acquired, and the initial cutting frame is adjusted based on the motion information to obtain the target cutting frame, so that when the terminal device moves, the acquired target acquisition picture can be trimmed based on the target cutting frame, the content containing the target object is effectively displayed, and the privacy exposure caused by the change of the view range is reduced to the greatest extent.
In some embodiments, the motion information includes: a moving distance of the terminal device relative to the target object and/or a rotating angle of the terminal device relative to the target object; the adjustment of the initial cutting frame in step S1042 to obtain the target cutting frame may be implemented by the following specific steps:
based on the moving distance, adjusting the size of the initial cutting frame to obtain a target cutting frame; and/or
And based on the rotation angle, adjusting the position of the initial cutting frame to obtain the target cutting frame.
In the present disclosure, the terminal device can obtain, by detection and calculation, the moving distance of the terminal device relative to the target object.
In some examples, the terminal device includes a distance sensor, and the terminal device can detect the distance between itself and the target object through the distance sensor before and after the movement, respectively, and calculate the moving distance of the terminal device relative to the target object.
Here, the distance sensor may be an infrared ranging sensor, an ultrasonic detection sensor, or a millimeter radar wave sensor, which is not limited in this disclosure.
In other examples, the terminal device can also obtain the distance between the target object and the terminal device through the depth of field in the acquired picture, and further obtain the moving distance of the terminal device relative to the target object through calculation.
In the embodiment of the disclosure, when the distance between the camera of the terminal device and the target object changes, that is, when the terminal device produces a moving distance relative to the target object, the target object is correspondingly enlarged or reduced in the image acquired by the camera; that is, the size of the target object in the target acquisition picture acquired by the camera after the movement is correspondingly enlarged or reduced compared with its size in the preview picture acquired before the movement. Therefore, in order to adapt to the size change of the target object, the size of the initial cutting frame is also enlarged or reduced by the same factor to obtain the target cutting frame.
Specifically, the moving distance of the terminal device can determine the magnification or reduction of the target object in the target acquisition picture relative to the target object in the preview picture, and then the size of the initial cutting frame in the terminal device is enlarged or reduced by a corresponding multiple to obtain the target cutting frame.
The terminal device provided by the disclosure comprises an acceleration sensor and a gyroscope; the acceleration sensor and the gyroscope are used for detecting the rotation angle of the equipment after the terminal equipment moves. Thereby, the rotation angle of the terminal device toward the target object is obtained.
In the embodiment of the disclosure, the moving distance and the rotating angle after the movement of the terminal equipment are detected can be further realized by the following modes: the terminal device can also comprise a compass sensor, and the compass sensor can obtain the displacement of the terminal device and the rotation angle of the device by detecting the coordinate change of the mobile phone relative to a ground coordinate system.
In the embodiment of the disclosure, when the rotation angle of the camera of the terminal device and the oriented target object changes, the position of the target object in the picture acquired by the camera changes. For example, when the terminal device rotates 15 degrees in the first direction with the frame facing the ground as an axis, the view finding range of the camera rotates 15 degrees in the first direction, and in the target acquisition picture acquired by the camera, the initial cropping frame containing the target object rotates 15 degrees in the second direction in the target acquisition picture, so as to obtain the target cropping frame. Here, the first direction and the second direction are opposite directions.
In the embodiment of the disclosure, if the angle and the distance of the terminal device facing the target object are changed, the position and the size of the initial cutting frame can be adjusted based on the moving distance and the rotating angle, so as to obtain the target cutting frame.
According to the embodiments of the disclosure, when the angle of the terminal device changes or it moves over a distance, the cutting frame is adjusted based on the moving distance and the rotation angle of the terminal device relative to the target object, so that the change of the camera's view range is effectively accommodated and the content containing the target object is displayed; in practical application, privacy exposure caused by the user's slight accidental contact with the terminal device is reduced to the greatest extent.
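A minimal sketch of how the size and position adjustment could be computed, reusing the CropBox type from the earlier sketch and assuming a pinhole camera model with a known focal length in pixels (focal_px). The exact formulas are illustrative assumptions; the disclosure only states that the moving distance drives the scaling and the rotation angle drives the repositioning.

```python
import math

def adjust_crop_box(box: CropBox,
                    dist_before_m: float, dist_after_m: float,
                    yaw_deg: float, focal_px: float) -> CropBox:
    """Scale the cutting frame by the moving distance and shift it by the rotation angle."""
    # Moving distance: under a pinhole model the target object's apparent size
    # scales roughly with dist_before / dist_after, so the cutting frame is
    # enlarged or reduced by the same factor around its centre.
    scale = dist_before_m / dist_after_m
    new_w, new_h = int(box.w * scale), int(box.h * scale)
    cx, cy = box.x + box.w // 2, box.y + box.h // 2
    # Rotation angle: a yaw of theta moves the scene by roughly focal_px * tan(theta)
    # pixels in the picture, in the direction opposite to the device rotation.
    dx = int(-focal_px * math.tan(math.radians(yaw_deg)))
    return CropBox(x=cx - new_w // 2 + dx, y=cy - new_h // 2, w=new_w, h=new_h)
```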
In some embodiments, the target video frame is obtained based on the target crop frame and the target acquisition frame acquired after the movement of the terminal device in step S105, which may be implemented by the following specific steps:
covering the picture outside the target cutting frame, and taking the covered picture and the picture in the target cutting frame as target video pictures;
or,
cutting the target acquisition picture based on the target cutting frame to obtain a cutting picture; and amplifying the clipping picture to obtain a target video picture.
Referring to fig. 5, based on the target crop frame X2, the target acquisition picture B is divided into two areas B1 and B2, and the area B2 is the picture content selected in the frame of the target crop frame X2, and contains the target object; b1 area is the picture content outside the target cutting frame X2; and (3) performing covering processing on the picture in the B1 area, and taking the covered picture in the B1 area and the covered picture in the B2 area as target video pictures.
Here, the masking process in the present disclosure means not displaying the picture of the B1 region. In some examples, the masking process may be to adjust the luminance value of the picture of the B1 region to 0; in other examples, the masking process may be to fill the B1 region picture with a single color, such as filling it with black; in other examples, the masking process may be to overlay a new layer on the picture of the B1 region.
Masking the picture outside the target cutting frame allows the content the user selected for display to be shown in a targeted manner; in a video conference or video call scenario, it reduces the exposure of redundant picture content caused when the view range of the terminal device changes because the user unintentionally touches it, ensuring the user's safety and privacy.
Referring to fig. 5, the target acquisition frame B is cut based on the target cutting frame X2 to obtain a cut frame, where the cut frame is a frame content B2 selected in the frame of the target cutting frame X2; and amplifying the clipping picture to obtain a target video picture.
Here, the enlarging process of the clipping picture to obtain the target video picture may be implemented in the following specific manner: in some examples, the crop screen is scaled up; the size of the picture after the equal proportion amplification cannot exceed the size of a video display area on the display screen; in other examples, the cropped picture is stretched or tiled such that the processed picture fills the video display area on the display screen.
The target video picture is obtained by amplifying the clipping picture, so that the aesthetic property of the video display picture can be ensured, and the use experience of a user is improved.
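The two alternatives above can be sketched as follows, again reusing CropBox. Masking is shown as filling the region outside the target cutting frame with black (one of the masking options described above), and the equal-proportion enlargement uses OpenCV's resize; the function names are illustrative assumptions.

```python
import cv2
import numpy as np

def mask_outside(frame: np.ndarray, box: CropBox) -> np.ndarray:
    """First option: keep the frame size and black out everything outside the cutting frame."""
    out = np.zeros_like(frame)
    out[box.y:box.y + box.h, box.x:box.x + box.w] = \
        frame[box.y:box.y + box.h, box.x:box.x + box.w]
    return out

def crop_and_enlarge(frame: np.ndarray, box: CropBox,
                     display_w: int, display_h: int) -> np.ndarray:
    """Second option: crop to the cutting frame, then enlarge in equal proportion
    so that the result does not exceed the video display area."""
    cropped = frame[box.y:box.y + box.h, box.x:box.x + box.w]
    scale = min(display_w / box.w, display_h / box.h)
    size = (int(box.w * scale), int(box.h * scale))  # cv2 expects (width, height)
    return cv2.resize(cropped, size, interpolation=cv2.INTER_LINEAR)
```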
In some embodiments, the target video frame is obtained based on the target crop frame and the target acquisition frame after the movement of the terminal device in step S105, which may also be implemented by the following specific steps:
and under the condition that the target cutting frame part is positioned in the target acquisition picture, reducing the size of the target cutting frame, and obtaining a target video picture based on the reduced cutting frame.
In the present disclosure, when a terminal device moves, a view finding range of a camera of the terminal device changes; when the target object part is displayed in the view-finding range after the camera moves, namely the target object part is displayed in the target acquisition picture, the target cutting frame part is positioned in the target acquisition picture, and at the moment, the size of the target cutting frame is reduced, so that the reduced target cutting frame contains the target object which is partially displayed.
According to the embodiments of the disclosure, the situation in which the terminal device deviates from the target object during image acquisition can be accommodated, exposure of private picture content is reduced, and the user's experience is ensured.
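One simple reading of "reducing the size of the target cutting frame", shown purely as an assumption, is to clip the frame to the part that still lies inside the target acquisition picture:

```python
def fit_crop_to_frame(box: CropBox, frame_w: int, frame_h: int) -> CropBox:
    """Clip the target cutting frame to the bounds of the target acquisition picture."""
    x0, y0 = max(box.x, 0), max(box.y, 0)
    x1 = min(box.x + box.w, frame_w)
    y1 = min(box.y + box.h, frame_h)
    return CropBox(x=x0, y=y0, w=max(x1 - x0, 0), h=max(y1 - y0, 0))
```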
In some embodiments, the obtaining the initial crop box in step S101 may be implemented by the following specific steps:
Determining an initial crop box based on a moving track acted on the preview screen;
or,
and carrying out target detection on the preview picture, and taking a display frame corresponding to the detected target object as an initial cutting frame.
In the embodiment of the disclosure, a user of the terminal device can perform touch operation on the touch display screen. Here, based on the screen content of the preview screen, the user generates a movement track of the touch on the touch display screen to determine an initial crop box. Specifically, the user may select the target object, and form a crop box on the display screen through a closed moving track including the target object, for example, in a preview screen on the touch display screen, a square crop box is circled around the target object.
The initial cutting frame is determined through the moving track acted on the preview picture, so that the picture display requirement of a user can be reasonably met, and the use experience of the user is ensured.
In the embodiment of the disclosure, target detection is performed on the preview picture to obtain a plurality of acquisition objects and the view frames corresponding to them, and the view frame corresponding to the target object among the plurality of acquisition objects is used as the initial cropping frame.
Here, the initial cutting frame may be obtained by determining the acquisition object with the largest size among the plurality of acquisition objects as the target object and taking its view frame as the initial cutting frame; alternatively, the target object may be determined from the plurality of acquisition objects through a touch operation performed by the user on the touch display screen, and its view frame taken as the initial cutting frame.
By detecting the target of the preview picture, the time for selecting the cutting frame by the user is saved, and the acquisition of the initial cutting frame can be more flexible and intelligent.
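Both ways of obtaining the initial cutting frame can be sketched briefly, again reusing CropBox. The touch track is assumed to arrive as a list of (x, y) screen points, and the object detector itself is out of scope, so the second helper simply receives the candidate display frames the detector produced; all names are illustrative assumptions.

```python
def crop_box_from_track(track: list[tuple[int, int]]) -> CropBox:
    """Bounding box of a closed touch track drawn around the target object."""
    xs, ys = [p[0] for p in track], [p[1] for p in track]
    return CropBox(x=min(xs), y=min(ys), w=max(xs) - min(xs), h=max(ys) - min(ys))

def crop_box_from_detections(detections: list[CropBox]) -> CropBox:
    """Take the largest detected display frame as the initial cutting frame."""
    return max(detections, key=lambda b: b.w * b.h)
```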
In some embodiments, displaying the cropped picture in step S103 may be achieved by the following specific steps:
determining whether a sensitive object exists in the cut picture;
and under the condition that the cut picture has the sensitive object, blurring processing is carried out on the sensitive object or the picture where the sensitive object is positioned is cut.
Here, the cut picture may be a picture obtained by cutting a preview picture acquired before the movement of the terminal device according to the initial cutting frame, or may be a target video picture obtained by cutting a target acquisition picture acquired after the movement of the terminal device based on the target cutting frame.
It should be noted that the sensitive object is a different object from the target object, and the user does not want to show.
It should be noted that, taking the example that the cut picture is the target video picture, the initial cutting frame in the preview picture includes the target object and the initial background; in the target acquisition picture, a target object and a target background are included in a target cutting frame. Here, the target video frame obtained by clipping the target acquisition frame according to the target clipping frame also includes the target object and the target background. Therefore, determining whether the sensitive object exists in the target video frame may be to perform target detection on the target background, and determine whether the sensitive object exists in the target background.
In the embodiment of the present disclosure, when a sensitive object exists in the target video picture, the sensitive object may be blurred, where the blurring may be implemented as fuzzing or as masking; the masking manner is described above in the embodiments of the present disclosure and is not repeated here. Cutting the picture where the sensitive object is located may be replacing the target background containing the sensitive object with the initial background.
By detecting and processing the sensitive objects in the picture, the exposure of the sensitive objects is reduced, and the privacy of the user is effectively protected.
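A sketch of the blurring branch, assuming the sensitive object has already been located as a rectangular region (the detection itself is not shown); Gaussian blurring stands in for the fuzzing option, and CropBox is reused from the earlier sketches.

```python
import cv2
import numpy as np

def blur_sensitive_region(frame: np.ndarray, region: CropBox, ksize: int = 31) -> np.ndarray:
    """Blur only the area where a sensitive object was found; ksize must be odd."""
    out = frame.copy()
    roi = out[region.y:region.y + region.h, region.x:region.x + region.w]
    out[region.y:region.y + region.h, region.x:region.x + region.w] = \
        cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return out
```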
In some embodiments, displaying the cropped picture in step S103 may also be implemented by the following specific steps:
performing correction processing on the cut picture and displaying the corrected picture.
Here, the cut picture may be a picture obtained by cutting a preview picture acquired before the movement of the terminal device according to the initial cutting frame, or may be a target video picture obtained by cutting a target acquisition picture acquired after the movement of the terminal device based on the target cutting frame.
Taking the case where the cut picture is the target video picture as an example: because the terminal device has moved, the view range of the camera changes, and the edges of the picture collected by the camera and the objects in the picture may be distorted to a certain extent. The present disclosure can therefore detect the target video picture and, if it is distorted, correct the deviation of the picture edges and of the objects in the picture, and display the corrected picture. The correction processing ensures that the displayed picture remains visually pleasing.
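The disclosure does not specify how the correction is performed. One common approach for wide-angle edge distortion, shown here purely as an assumption, is to undistort the picture using previously calibrated camera intrinsics:

```python
import cv2
import numpy as np

def correct_picture(frame: np.ndarray,
                    camera_matrix: np.ndarray,
                    dist_coeffs: np.ndarray) -> np.ndarray:
    """Undistort the cut picture before display, using calibrated intrinsics
    of the (ultra-wide-angle) camera."""
    return cv2.undistort(frame, camera_matrix, dist_coeffs)
```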
Fig. 6 is a flowchart illustrating a video display method in a video communication scenario according to an exemplary embodiment. In Fig. 6, a first user using a first terminal device and a second user using a second terminal device are in video communication. The video communication may be a video call or a video conference, which the present disclosure does not limit. The server is connected directly or indirectly to the first terminal device and the second terminal device in a wired or wireless manner, and may be a single server, a cloud server, or a server cluster. Here, the first terminal device and the second terminal device have built-in processing modules, so that stand-alone video acquisition and video clipping can be completed. Specifically, referring to Fig. 6, the video display method in the video communication scenario provided by the embodiment of the present disclosure may be implemented by the following steps (a sketch of the server-side splicing is given after the flow):
step 601, starting a front ultra-wide angle camera;
step 602, collecting and previewing and displaying a first preview picture based on a front ultra-wide angle camera;
step 603, acquiring a first initial cropping frame in a first preview screen, wherein the first initial cropping frame contains a first user using a first terminal device;
Step 604, obtaining a first display video picture based on the first initial cropping frame and the first preview picture, and sending the first display video picture to the server;
here, the first display video picture is a cut-out picture proposed by the above embodiments of the present disclosure;
step 605, starting a front ultra-wide angle camera;
step 606, collecting and previewing a second preview picture based on the front ultra-wide angle camera;
step 607, obtaining a second initial cropping frame in the second preview screen, wherein the second initial cropping frame includes a second user using a second terminal device;
step 608, obtaining a second display video picture based on the second initial cropping frame and the second preview picture, and sending the second display video picture to the server;
here, the second display video picture is a cut-out picture proposed by the above embodiment of the present disclosure;
step 609, splicing the first display video picture and the second display video picture to obtain a first call picture;
step 610, sending the first call picture to the first terminal device;
step 611, sending the first call picture to the second terminal device;
step 612, displaying the first call picture;
step 613, displaying the first call picture;
step 614, obtaining motion information of the motion of the first terminal device; under the condition that the terminal equipment moves, a target cutting frame is obtained based on the initial cutting frame; acquiring a target video picture based on a target cutting frame and a target acquisition picture acquired after the movement of the terminal equipment, and sending the target video picture to a server;
step 615, splicing the target video picture sent by the first terminal device and the second display video picture sent by the second terminal device to obtain a second call picture;
step 616, sending the second call picture to the first terminal device;
step 617, sending the second call picture to the second terminal device;
step 618, displaying the second call picture;
step 619, displaying the second call picture.
Here, the terminal device may be the first terminal device and the second terminal device set forth above in the present disclosure.
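A sketch of the server-side splicing in steps 609 and 615, assuming that "splicing" means placing the two display video pictures side by side after matching their heights; this layout is an assumption, since the disclosure does not specify how the call picture is composed.

```python
import cv2
import numpy as np

def splice_call_picture(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Compose the call picture from the two users' display video pictures."""
    h = min(first.shape[0], second.shape[0])

    def fit(img: np.ndarray) -> np.ndarray:
        scale = h / img.shape[0]
        return cv2.resize(img, (int(img.shape[1] * scale), h))

    return np.hstack([fit(first), fit(second)])
```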
Embodiments of the present disclosure provide a video display apparatus. In some embodiments, referring to fig. 7, fig. 7 is a schematic structural diagram of a video display apparatus according to an exemplary embodiment, where the video display apparatus is applied to a terminal device, and includes:
An acquisition module 701 configured to acquire an initial crop box at the time of video acquisition;
the clipping module 702 is configured to clip the preview picture acquired when the terminal device starts the camera based on the initial clipping frame;
and a display module 703 configured to display the cropped screen.
In some embodiments, the apparatus further comprises:
an adjustment module 704, configured to obtain a target crop frame based on the initial crop frame in the case of movement of the terminal device; the frame selection content of the initial cutting frame and the frame selection content of the target cutting frame comprise the same target object;
clipping module 702 is also configured to: acquiring a target video picture based on a target cutting frame and a target acquisition picture acquired after the terminal equipment moves;
the display module 703 is further configured to: and displaying the target video picture.
In some embodiments, the adjustment module 704 includes:
a detection unit 7041 configured to acquire movement information when the terminal device moves;
the adjusting unit 7042 is configured to adjust the initial crop frame based on the motion information to obtain the target crop frame.
In some embodiments, the motion information includes: a movement distance of the terminal device relative to the target object and/or a rotation angle of the terminal device relative to the target object:
The adjusting unit 7042 is further configured to adjust the size of the initial cutting frame based on the moving distance to obtain a target cutting frame; and/or
And based on the rotation angle, adjusting the position of the initial cutting frame to obtain the target cutting frame.
In some embodiments, clipping module 702 is further configured to:
covering the picture outside the target cutting frame, and taking the covered picture and the picture in the target cutting frame as target video pictures;
or,
cutting the target acquisition picture based on the target cutting frame to obtain a cutting picture; and amplifying the clipping picture to obtain a target video picture.
In some embodiments, the acquisition module 701 is further configured to:
determining an initial crop box based on a moving track acted on the preview screen;
or,
and carrying out target detection on the preview picture, and taking a display frame corresponding to the detected target object as an initial cutting frame.
In some embodiments, clipping module 702 is further configured to:
and under the condition that the target cutting frame part is positioned in the target acquisition picture, reducing the size of the target cutting frame, and obtaining a target video picture based on the reduced cutting frame.
In some embodiments, the display module 703 is further configured to:
Determining whether a sensitive object exists in the cut picture;
and under the condition that the cut picture has the sensitive object, blurring processing is carried out on the sensitive object or the picture where the sensitive object is positioned is cut.
In some embodiments, the display module 703 is further configured to:
performing correction processing on the cut picture and displaying the corrected picture.
The embodiment of the disclosure provides a terminal device, which comprises:
a memory for storing processor-executable instructions;
the processor is connected with the memory;
wherein the processor is configured to execute the video display method provided in any of the foregoing technical solutions.
The memory may include various types of storage media, which are non-transitory computer storage media capable of retaining the information stored thereon after the terminal device is powered down.
The processor may be connected to the memory through a bus or the like, and is used to read the executable program stored in the memory, for example, to implement at least one of the video display methods proposed in the above embodiments of the present disclosure.
Fig. 8 is a block diagram of a terminal device according to an exemplary embodiment. For example, the terminal device 800 may be a mobile phone, computer, digital broadcast user device, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 8, a terminal device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the terminal device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the terminal device 800. Examples of such data include instructions for any application or method operating on terminal device 800, contact data, phonebook data, messages, pictures, video, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the terminal device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal device 800.
The multimedia component 808 includes a screen between the terminal device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. Here, the front camera includes an ultra wide angle camera; the front camera and/or the rear camera may receive external multimedia data when the terminal device 800 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the terminal device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the terminal device 800. For example, the sensor assembly 814 may detect an on/off state of the device 800, a relative positioning of the components, such as a display and keypad of the terminal device 800, the sensor assembly 814 may also detect a change in position of the terminal device 800 or a component of the terminal device 800, the presence or absence of a user's contact with the terminal device 800, an orientation or acceleration/deceleration of the terminal device 800, and a change in temperature of the terminal device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the terminal device 800 and other devices, either wired or wireless. The terminal device 800 may access a wireless network based on a communication standard, such as WiFi,2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal device 800 can be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 804 including instructions executable by the processor 820 of the terminal device 800 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (12)

  1. A video display method, wherein the method is applied to a terminal device, the method comprising:
    acquiring an initial cutting frame when video acquisition is performed;
    based on the initial cutting frame, cutting a preview picture acquired when the terminal device starts a camera;
    and displaying the cut picture.
  2. The method of claim 1, wherein the method further comprises:
    obtaining a target cutting frame based on the initial cutting frame under the condition that the terminal device moves; wherein the content framed by the initial cutting frame and the content framed by the target cutting frame comprise the same target object;
    acquiring a target video picture based on the target cutting frame and a target acquisition picture acquired after the terminal device moves;
    the displaying the cut picture comprises:
    displaying the target video picture.
  3. The video display method according to claim 2, wherein the obtaining a target cutting frame based on the initial cutting frame comprises:
    acquiring motion information of the terminal device during the movement;
    adjusting the initial cutting frame based on the motion information to obtain the target cutting frame.
  4. The video display method according to claim 3, wherein the motion information comprises: a movement distance of the terminal device relative to the target object and/or a rotation angle of the terminal device relative to the target object;
    the adjusting the initial cutting frame based on the motion information to obtain the target cutting frame comprises:
    adjusting the size of the initial cutting frame based on the movement distance to obtain the target cutting frame; and/or
    adjusting the position of the initial cutting frame based on the rotation angle to obtain the target cutting frame.
  5. The video display method according to any one of claims 2 to 4, wherein the obtaining a target video picture based on the target cutting frame and a target acquisition picture acquired after the terminal device moves comprises:
    covering the picture outside the target cutting frame, and taking the covered picture and the picture within the target cutting frame as the target video picture;
    or,
    cutting the target acquisition picture based on the target cutting frame to obtain a cut picture, and amplifying the cut picture to obtain the target video picture.
  6. The video display method according to any one of claims 1 to 4, wherein the acquiring an initial cutting frame comprises:
    determining the initial cutting frame based on a movement track applied to the preview picture;
    or,
    performing target detection on the preview picture, and taking a display frame corresponding to the detected target object as the initial cutting frame.
  7. The video display method according to any one of claims 2 to 4, wherein the obtaining a target video picture based on the target cutting frame and a target acquisition picture acquired after the terminal device moves comprises:
    under the condition that the target cutting frame is only partially located within the target acquisition picture, reducing the size of the target cutting frame, and obtaining the target video picture based on the reduced cutting frame.
  8. The video display method according to any one of claims 1 to 4, wherein the displaying the cut picture comprises:
    determining whether a sensitive object exists in the cut picture;
    under the condition that the sensitive object exists in the cut picture, performing blurring processing on the sensitive object or cutting out the picture area where the sensitive object is located.
  9. The video display method according to any one of claims 1 to 4, wherein the displaying the cut picture comprises:
    correcting the cut picture, and displaying the corrected picture.
  10. A video display apparatus, wherein the apparatus is applied to a terminal device, the apparatus comprising:
    the acquisition module is configured to acquire an initial cutting frame when video acquisition is performed;
    the cutting module is configured to cut, based on the initial cutting frame, the preview picture acquired when the terminal device starts the camera;
    and the display module is configured to display the cut picture.
  11. A terminal device comprising a processor, a memory, and an executable program stored on the memory and executable by the processor, wherein the processor performs the method as provided in any one of claims 1 to 9 when running the executable program.
  12. A computer storage medium, wherein the computer storage medium stores an executable program; the executable program, when executed by a processor, is capable of implementing the method as provided in any one of claims 1 to 9.
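By way of illustration only, the following sketch shows, in Python with NumPy and OpenCV, the kind of picture-level operations described in claims 1 to 5 and 7: cutting a picture with a cutting frame, deriving a target cutting frame from motion information, and producing the target video picture either by covering the area outside the frame or by cutting and amplifying. The (x, y, w, h) frame representation, the mapping constants from movement distance and rotation angle to pixels, and all function names are assumptions made for this sketch, not part of the claimed method.

```python
# Illustrative sketch only. Pictures are numpy arrays of shape (H, W, 3);
# a cutting frame is a tuple (x, y, w, h) in pixel coordinates.
import cv2
import numpy as np

def clamp_frame(frame, width, height):
    """Reduce a frame so that it lies entirely inside a width x height
    picture (cf. claim 7)."""
    x, y, w, h = frame
    x = max(0, min(x, width - 1))
    y = max(0, min(y, height - 1))
    w = max(1, min(w, width - x))
    h = max(1, min(h, height - y))
    return x, y, w, h

def cut_picture(picture, frame):
    """Cut the picture with the cutting frame (claims 1 and 5)."""
    x, y, w, h = clamp_frame(frame, picture.shape[1], picture.shape[0])
    return picture[y:y + h, x:x + w]

def adjust_frame(initial_frame, movement_distance=0.0, rotation_angle=0.0,
                 shrink_per_metre=0.3, shift_px_per_radian=500.0):
    """Derive the target cutting frame from motion information (claims 3, 4).
    The two mapping constants are placeholders, not values from the patent."""
    x, y, w, h = initial_frame
    scale = max(0.2, 1.0 - shrink_per_metre * movement_distance)
    new_w, new_h = max(1, int(w * scale)), max(1, int(h * scale))
    new_x = x + int(shift_px_per_radian * rotation_angle) + (w - new_w) // 2
    new_y = y + (h - new_h) // 2
    return new_x, new_y, new_w, new_h

def target_video_picture(captured, target_frame, mode="zoom"):
    """Build the target video picture (claim 5): either cover everything
    outside the target cutting frame, or cut the frame out and amplify it."""
    h_img, w_img = captured.shape[:2]
    x, y, w, h = clamp_frame(target_frame, w_img, h_img)
    if mode == "cover":
        covered = np.zeros_like(captured)
        covered[y:y + h, x:x + w] = captured[y:y + h, x:x + w]
        return covered
    cut = captured[y:y + h, x:x + w]
    return cv2.resize(cut, (w_img, h_img), interpolation=cv2.INTER_LINEAR)
```

Clamping the adjusted frame to the bounds of the captured picture before cutting mirrors the size reduction of claim 7 for the case where the target cutting frame only partially falls within the target acquisition picture.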
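Similarly, the blurring of sensitive objects in claim 8 can be sketched as a post-processing step on the cut picture. The detector is deliberately left abstract (detect_sensitive_regions is a hypothetical callable returning bounding boxes); only the blurring itself is shown, and the kernel size is an arbitrary choice for the example.

```python
# Illustrative sketch only: blur detected sensitive regions of the cut
# picture before it is displayed (cf. claim 8).
import cv2

def redact_sensitive(picture, detect_sensitive_regions):
    """detect_sensitive_regions: hypothetical callable mapping a picture to
    a list of (x, y, w, h) boxes around sensitive objects."""
    out = picture.copy()
    for x, y, w, h in detect_sensitive_regions(picture):
        roi = out[y:y + h, x:x + w]
        # Gaussian kernel size must be odd; 51 is only an illustrative value.
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return out
```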
CN202280004304.8A 2022-05-25 2022-05-25 Video display method and device, terminal equipment and computer storage medium Pending CN117480772A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/095020 WO2023225910A1 (en) 2022-05-25 2022-05-25 Video display method and apparatus, terminal device, and computer storage medium

Publications (1)

Publication Number Publication Date
CN117480772A true CN117480772A (en) 2024-01-30

Family

ID=88918022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280004304.8A Pending CN117480772A (en) 2022-05-25 2022-05-25 Video display method and device, terminal equipment and computer storage medium

Country Status (2)

Country Link
CN (1) CN117480772A (en)
WO (1) WO2023225910A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009171428A (en) * 2008-01-18 2009-07-30 Nec Corp Control method and program for digital camera apparatus and electronic zoom
CN104731494B (en) * 2013-12-23 2019-05-31 中兴通讯股份有限公司 A kind of method and apparatus of preview interface selection area amplification
CN106358069A (en) * 2016-10-31 2017-01-25 维沃移动通信有限公司 Video data processing method and mobile terminal
KR20190048291A (en) * 2017-10-31 2019-05-09 삼성에스디에스 주식회사 System and method for video conference using image clipping
CN109905593B (en) * 2018-11-06 2021-10-15 华为技术有限公司 Image processing method and device
CN113014793A (en) * 2019-12-19 2021-06-22 华为技术有限公司 Video processing method and electronic equipment
CN112347849B (en) * 2020-09-29 2024-03-26 咪咕视讯科技有限公司 Video conference processing method, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2023225910A1 (en) 2023-11-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination