CN113723397A - Screen capturing method and electronic equipment - Google Patents

Screen capturing method and electronic equipment

Info

Publication number
CN113723397A
CN113723397A (application CN202010455200.1A; granted as CN113723397B)
Authority
CN
China
Prior art keywords
electronic device
picture
screen
area
screenshot
Prior art date
Legal status
Granted
Application number
CN202010455200.1A
Other languages
Chinese (zh)
Other versions
CN113723397B (en)
Inventor
熊刘冬
郭乃荣
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority application: CN202010455200.1A (granted as CN113723397B)
PCT application: PCT/CN2021/094619 (published as WO2021238740A1)
Publication of CN113723397A; application granted; publication of CN113723397B
Legal status: Active

Classifications

    • G06F3/04845 — Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/77 — Retouching; Inpainting; Scratch removal
    • H04N21/433 — Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 — Recording operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A screen capture method and an electronic device are provided. In the method, in response to a user's screen capture operation, the electronic device captures the screen of a video multiple times within a time period, detects a target to be processed in each of the captured pictures and processes the area it occupies into a transparent area, and superimposes the processed pictures to obtain a final screenshot picture. By implementing the technical solution provided in this application, a person no longer occludes the background when a video screenshot is taken, so the captured picture can display more information.

Description

Screen capturing method and electronic equipment
Technical Field
The present application relates to the field of terminals and communication technologies, and in particular, to a screen capture method and an electronic device.
Background
With the development of network technology, more and more users are learning through online courses. Online-course videos make effective use of fragmented time, can be watched repeatedly, and allow users to practise intensively on the topics they are weakest in.
While watching an online-course video, a user can save important content as screenshots for quick review and note taking later. However, in current online-course videos the teacher usually stands in front of part of the blackboard or display screen while lecturing. Part of the blackboard content in a captured picture is therefore occluded by the teacher, so the screenshot displays less information.
Disclosure of Invention
This application provides a screen capture method and an electronic device that prevent a person from occluding the background when a video screenshot is taken, so that the captured picture can display more information.
In a first aspect, the present application provides a screen capture method, including: in response to a screen capture operation of a user, an electronic device captures the screen N times for a video within a first time period to obtain N first screenshot pictures, where N is a positive integer not less than 2; the electronic device performs target detection on the first screenshot pictures and determines the area where a target to be processed is located in each first screenshot picture, the target to be processed being the object of the target detection; the electronic device processes the area where the target to be processed is located in the first screenshot picture into a transparent area to obtain M second screenshot pictures, where M is a positive integer less than or equal to N; and the electronic device superimposes the M second screenshot pictures to obtain a third screenshot picture. When M is equal to 2, the third screenshot picture displays the information of the non-transparent area of the uppermost second screenshot picture and the information of a first area corresponding to the transparent area of the uppermost second screenshot picture; the first area is the area on the next second screenshot picture whose position corresponds to the transparent area of the uppermost second screenshot picture, and the information of the first area includes the information of its non-transparent portion. When M is greater than 2, the information of the first area further includes the information of a second area corresponding to the transparent portion of the first area; the second area is the area on the next-lower second screenshot picture whose position corresponds to the transparent portion of the first area, and its information includes the information of its non-transparent portion; and so on, until no transparent area remains in the third screenshot picture, or the next-lower second screenshot picture is the lowermost second screenshot picture.
In the above embodiment, after the user performs the screen capture operation, the electronic device captures the screen of the video multiple times. Target detection is performed on each first screenshot picture, the area where the person serving as the target to be processed is located is detected and processed into a transparent area, and several second screenshot pictures are obtained. Because the target to be processed moves between screenshots, its position relative to the background in the video (such as a blackboard) differs from picture to picture, so the background information it occludes or exposes also differs. The electronic device superimposes these second screenshot pictures, which contain different information. After superimposition, the transparent area of an upper-layer picture displays the information of the corresponding area in the picture below it. The final third screenshot picture can therefore display, in addition to the information of the uppermost second screenshot picture, the information of the corresponding areas of lower second screenshot pictures shown through the transparent area of the uppermost one. It is understood that if a corresponding area on a lower second screenshot picture is itself transparent, that area in turn displays the information of the corresponding area of the picture below it. The third screenshot picture can thus display more information than any single second or first screenshot picture: the person no longer blocks the background, and the screenshot displays more information.
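The layered superimposition described above can be sketched as a simple compositing pass: each output pixel is taken from the highest layer that is opaque there, so a transparent (person) region shows whichever lower layer first covers it. This is an illustrative sketch under assumptions not stated in the patent: screenshots are RGBA arrays, and an alpha value of 0 marks the removed-person region; the function name is hypothetical.

```python
import numpy as np

def composite_screenshots(layers):
    """Superimpose RGBA screenshots, ordered top of stack first.

    `layers` is a list of (H, W, 4) uint8 arrays; alpha 0 marks the
    transparent region left after removing the person. Each output
    pixel comes from the highest layer that is opaque at that position.
    """
    out = layers[-1].copy()          # start from the bottom layer
    for layer in reversed(layers[:-1]):
        opaque = layer[..., 3] > 0   # mask of non-transparent pixels
        out[opaque] = layer[opaque]  # upper layer wins where opaque
    return out

# Toy 1x2 example: the top layer's left pixel is transparent (person
# region), so the bottom layer shows through only there.
top = np.array([[[0, 0, 0, 0], [10, 10, 10, 255]]], dtype=np.uint8)
bottom = np.array([[[7, 7, 7, 255], [9, 9, 9, 255]]], dtype=np.uint8)
result = composite_screenshots([top, bottom])
print(result[0, 0, 0], result[0, 1, 0])  # 7 10
```

If the corresponding lower-layer pixel were also transparent, the loop would simply fall through to the next layer down, matching the "and so on" recursion in the claim.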
With reference to the first aspect, in some embodiments, in response to a screen capture operation of a user, the electronic device performs N screen captures on a video within a first time period to obtain N first screen capture pictures, which specifically includes: in response to a screen capture operation of a user, the electronic equipment determines a starting time of a first time period and a stopping time of the first time period; the electronic equipment performs N screen shots on the video within the first time period to obtain N first screen shots.
In this embodiment of the application, the electronic device can determine the start time and the stop time of the first time period according to the user's screen capture operation. Because the user controls when capturing starts and stops, the final third screenshot picture better matches the user's expectations.
Optionally, determining the start-stop time of the first time period may be based on different user operations:
in combination with some embodiments of the first aspect, in some embodiments, the screen capture operation includes a first operation and a second operation that are not completed consecutively; the electronic device determines the starting time and the stopping time of the first time period in response to the screen capturing operation of the user, and specifically includes: in response to the first operation of the user, the electronic equipment determines a starting moment of the first time period; in response to the second operation by the user, the electronic device determines a stop time of the first time period.
In the embodiment of the application, the start-stop time is determined by different discontinuous operations, so that the determination process of the first time period is simpler.
In combination with some embodiments of the first aspect, in some embodiments, the screen capture operation comprises a series of operations performed in succession; the electronic device determines the starting time and the stopping time of the first time period in response to the screen capturing operation of the user, and specifically includes: responding to the operation of triggering screen capture by a user, and displaying a time period selection control by the electronic equipment; in response to a user operation on the time period selection control, the electronic device determines a start time and a stop time of the first time period.
In the embodiment of the application, the start-stop time is determined from the time selection control by the user through a series of continuous operations, so that the determination of the first time period is more accurate.
Optionally, there are many ways for the electronic device to capture the video N times in the first time period:
with reference to some embodiments of the first aspect, in some embodiments, the electronic device performs N screen shots on the video within the first time period to obtain N first screen shots, which specifically includes: the electronic equipment performs N screen shots on the video within the first time period according to a preset time period to obtain N first screen shots.
In the embodiment of the application, the electronic equipment performs screen capturing within the first time period according to the preset time period, so that the time interval between different screen capturing pictures is ensured, and the probability of obtaining more information in the third screen capturing picture is improved.
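The periodic scheme above amounts to computing capture timestamps at a fixed interval inside the first time period. The sketch below is illustrative only; the function name and the choice to include both endpoints of the window are assumptions, not details given in the patent.

```python
def capture_times(start, stop, period):
    """Timestamps (in seconds) at which to grab a frame: one capture
    every `period` seconds inside [start, stop], including the start."""
    if period <= 0 or stop < start:
        raise ValueError("need period > 0 and stop >= start")
    times = []
    t = start
    while t <= stop:
        times.append(round(t, 6))
        t += period
    return times

# A 10-second window sampled every 2.5 s yields N = 5 first screenshots.
print(capture_times(0.0, 10.0, 2.5))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```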
With reference to some embodiments of the first aspect, in some embodiments, the electronic device performs N screen shots on the video within the first time period to obtain N first screen shots, which specifically includes: the electronic equipment performs target detection on the video within the first time period; and when the position change of the target to be processed in the video is determined to exceed the preset distance threshold, performing screen capture once to generate a first screen capture picture until the N first screen capture pictures are obtained.
In the embodiment of the application, the electronic device performs screen capturing when the moving position of the target to be processed exceeds the preset distance threshold according to the target detection result, so that more information in the background can be exposed in the obtained first screen capturing picture, and more information can be displayed in the third screen capturing picture.
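The motion-triggered scheme can be sketched as follows: track the detected target's position frame by frame and capture whenever it has moved more than the preset distance threshold since the last capture. The detector itself is out of scope here; the per-frame (x, y) centres, the function name, and the "always capture frame 0" choice are illustrative assumptions.

```python
import math

def select_capture_frames(centers, dist_threshold, n_max):
    """Pick frame indices to screenshot: frame 0, then every frame whose
    detected-target centre has moved more than `dist_threshold` pixels
    since the last captured frame, stopping after `n_max` captures.

    `centers` is a list of (x, y) target positions per frame, e.g. the
    bounding-box centres reported by a person detector (hypothetical).
    """
    captured = [0]
    last = centers[0]
    for i, c in enumerate(centers[1:], start=1):
        if len(captured) >= n_max:
            break
        if math.dist(c, last) > dist_threshold:
            captured.append(i)
            last = c
    return captured

centers = [(0, 0), (2, 0), (12, 0), (13, 0), (30, 0)]
print(select_capture_frames(centers, dist_threshold=8, n_max=3))  # [0, 2, 4]
```

Triggering on movement, rather than on a clock, maximises how much newly exposed background each first screenshot contributes.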
In combination with some embodiments of the first aspect, in some embodiments, the method further comprises: the electronic device deletes, from the N first screenshot pictures, any first screenshot picture that does not meet the picture quality requirement for target detection.
Optionally, the order of superimposing the second screenshot picture may be obtained in multiple ways:
with reference to some embodiments of the first aspect, in some embodiments, the electronic device superimposes the M second screen capture pictures to obtain a third screen capture picture, which specifically includes: and the electronic equipment superposes the M second screen capture pictures according to a preset default sequence to obtain a third screen capture picture.
With reference to some embodiments of the first aspect, in some embodiments, the preset default order is an order in which temporally subsequent pictures are superimposed in the video.
In the above embodiment, the second screenshot pictures are superimposed in a preset default order, which makes the superimposition faster. If the preset default order places pictures played later in the video on top, the information displayed in the final third screenshot picture is the most recent information.
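The default ordering can be sketched as a sort by capture timestamp, later frames placed at the top of the stack so the composite favours the most recent content. The function name and the (timestamp, image) pairing are illustrative assumptions.

```python
def default_stack_order(shots):
    """Return screenshots ordered top-of-stack first, so that temporally
    later frames are superimposed on top and the composite shows the most
    recent information. `shots` is a list of (timestamp, image) pairs."""
    return [img for _, img in sorted(shots, key=lambda s: s[0], reverse=True)]

shots = [(5.0, "frame_a"), (1.0, "frame_b"), (3.0, "frame_c")]
print(default_stack_order(shots))  # ['frame_a', 'frame_c', 'frame_b']
```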
With reference to some embodiments of the first aspect, in some embodiments, the electronic device superimposes the M second screen capture pictures to obtain a third screen capture picture, which specifically includes: the electronic equipment displays the superposition sequence of the M second screen capture pictures;
and in response to the operation of adjusting the stacking sequence by the user, the electronic equipment stacks the M second screen capture pictures according to the stacking sequence adjusted by the user to obtain a third screen capture picture.
In this embodiment of the application, the stacking order can be adjusted by the user, so that the third screenshot picture obtained by superimposition better meets the user's needs.
In combination with some embodiments of the first aspect, in some embodiments, the method further comprises: the electronic device fills the transparent area in the third screenshot picture with the background color of the third screenshot picture.
In the embodiment of the application, if the third screenshot picture also has a transparent region, the transparent region can be filled with a background color, so that the attractiveness of the display of the third screenshot picture is improved.
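The background-color fill can be sketched as estimating the background as the most frequent opaque colour in the composite and painting any remaining transparent pixels with it. This is a crude stand-in for however the patent's implementation estimates "the background color"; the function name and the most-frequent-colour heuristic are assumptions.

```python
import numpy as np
from collections import Counter

def fill_with_background(img):
    """Fill remaining transparent pixels of an RGBA composite with the
    most frequent opaque colour, used here as a proxy for the background
    (e.g. the blackboard) colour."""
    opaque = img[..., 3] > 0
    colors = [tuple(p) for p in img[opaque][..., :3]]
    bg = Counter(colors).most_common(1)[0][0]   # dominant opaque colour
    out = img.copy()
    out[~opaque, :3] = bg
    out[~opaque, 3] = 255                       # make the fill opaque
    return out

# 1x3 toy image: two background pixels and one transparent hole.
img = np.array([[[5, 5, 5, 255], [5, 5, 5, 255], [0, 0, 0, 0]]],
               dtype=np.uint8)
filled = fill_with_background(img)
print(filled[0, 2])  # [  5   5   5 255]
```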
In combination with some embodiments of the first aspect, in some embodiments, the method further comprises: the electronic device fills in the transparent area in the third screenshot picture using an intelligent repair (image inpainting) technique.
In this embodiment of the application, if a transparent area remains in the third screenshot picture, it can be filled in using image inpainting, which improves the visual quality of the third screenshot picture.
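The patent does not specify the repair algorithm. As a loose illustration of the idea, a diffusion-style fill repeatedly replaces hole pixels with the mean of their already-known 4-neighbours, spreading surrounding content into the hole. A production system would use a proper inpainting routine (e.g. OpenCV's `cv2.inpaint` or a learned model); everything below, including the function name, is an assumption.

```python
import numpy as np

def naive_inpaint(img, mask, iters=20):
    """Diffusion-style hole filling (toy stand-in for real inpainting).

    img:  (H, W) float array (single channel for brevity)
    mask: (H, W) bool array, True where the pixel must be reconstructed
    """
    out = img.astype(float).copy()
    out[mask] = 0.0
    known = ~mask
    for _ in range(iters):
        # Per-pixel sum and count of known 4-neighbours, via padding.
        padded = np.pad(out * known, 1)
        kpad = np.pad(known.astype(float), 1)
        nsum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                padded[1:-1, :-2] + padded[1:-1, 2:])
        ncnt = (kpad[:-2, 1:-1] + kpad[2:, 1:-1] +
                kpad[1:-1, :-2] + kpad[1:-1, 2:])
        upd = mask & (ncnt > 0)                 # hole pixels reachable now
        out[upd] = nsum[upd] / ncnt[upd]
        known = known | upd                     # grow the known region
    return out

# A 3x3 patch of uniform brightness 10 with a one-pixel hole: the hole
# is reconstructed as the mean of its four neighbours.
img = np.full((3, 3), 10.0)
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
repaired = naive_inpaint(img, mask)
print(repaired[1, 1])  # 10.0
```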
In combination with some embodiments of the first aspect, in some embodiments, the method further comprises: the electronic device performs semi-transparent processing on the area where the target to be processed is located in the first screenshot picture corresponding to the uppermost second screenshot picture to obtain a semi-transparent target to be processed; and the electronic device superimposes the semi-transparent target to be processed on the third screenshot picture to obtain a fourth screenshot picture.
In this embodiment of the application, superimposing the semi-transparent target to be processed on the third screenshot picture yields both more background information and a view of the action and posture of the target to be processed, further increasing the amount of information obtained.
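The semi-transparent overlay is a standard alpha blend restricted to the target region: inside the person mask, the output mixes the original frame with the composite; elsewhere the composite is kept. The sketch below assumes RGB arrays, a boolean person mask, and 50% opacity; none of these specifics come from the patent.

```python
import numpy as np

def overlay_translucent_target(base, frame, target_mask, alpha=0.5):
    """Blend the person from the topmost original screenshot back onto
    the composite at partial opacity, so the pose stays visible while
    the recovered background still shows through.

    base, frame: (H, W, 3) uint8 arrays; target_mask: (H, W) bool
    """
    out = base.astype(float)
    blended = alpha * frame.astype(float) + (1 - alpha) * out
    out[target_mask] = blended[target_mask]   # blend only inside the mask
    return out.round().astype(np.uint8)

# Toy 1x2 example: left pixel is inside the person mask, right is not.
base = np.full((1, 2, 3), 100, dtype=np.uint8)   # composite background
frame = np.full((1, 2, 3), 200, dtype=np.uint8)  # original screenshot
mask = np.array([[True, False]])
result = overlay_translucent_target(base, frame, mask)
print(result[0, 0, 0], result[0, 1, 0])  # 150 100
```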
In combination with some embodiments of the first aspect, in some embodiments, the video is an instructional video and the object to be processed is a teacher.
In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors and a memory; the memory is coupled with the one or more processors and is configured to store computer program code comprising computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to perform: in response to a screen capture operation of a user, capturing the screen N times for a video within a first time period to obtain N first screenshot pictures, where N is a positive integer not less than 2; performing target detection on the first screenshot pictures and determining the area where a target to be processed is located in each first screenshot picture, the target to be processed being the object of the target detection; processing the area where the target to be processed is located in the first screenshot picture into a transparent area to obtain M second screenshot pictures, where M is a positive integer less than or equal to N; and superimposing the M second screenshot pictures to obtain a third screenshot picture. When M is equal to 2, the third screenshot picture displays the information of the non-transparent area of the uppermost second screenshot picture and the information of a first area corresponding to the transparent area of the uppermost second screenshot picture; the first area is the area on the next second screenshot picture whose position corresponds to the transparent area of the uppermost second screenshot picture, and the information of the first area includes the information of its non-transparent portion. When M is greater than 2, the information of the first area further includes the information of a second area corresponding to the transparent portion of the first area; the second area is the area on the next-lower second screenshot picture whose position corresponds to the transparent portion of the first area, and its information includes the information of its non-transparent portion; and so on, until no transparent area remains in the third screenshot picture, or the next-lower second screenshot picture is the lowermost second screenshot picture.
In the above embodiment, after the user performs the screen capture operation, the electronic device captures the screen of the video multiple times. Target detection is performed on each first screenshot picture, the area where the person serving as the target to be processed is located is detected and processed into a transparent area, and several second screenshot pictures are obtained. Because the target to be processed moves between screenshots, its position relative to the background in the video (such as a blackboard) differs from picture to picture, so the background information it occludes or exposes also differs. The electronic device superimposes these second screenshot pictures, which contain different information. After superimposition, the transparent area of an upper-layer picture displays the information of the corresponding area in the picture below it. The final third screenshot picture can therefore display, in addition to the information of the uppermost second screenshot picture, the information of the corresponding areas of lower second screenshot pictures shown through the transparent area of the uppermost one. It is understood that if a corresponding area on a lower second screenshot picture is itself transparent, that area in turn displays the information of the corresponding area of the picture below it. The third screenshot picture can thus display more information than any single second or first screenshot picture: the person no longer blocks the background, and the screenshot displays more information.
With reference to the second aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: determining a starting time of a first time period and a stopping time of the first time period in response to a screen capturing operation of a user; and carrying out N times of screen capturing on the video in the first time period to obtain N first screen capturing pictures.
In the embodiment of the application, the electronic device can determine the starting time and the stopping time of the first time period according to the screen capturing operation of the user, and the user controls the screen capturing starting and stopping time, so that the finally obtained third screen capturing picture can better meet the user expectation.
Optionally, determining the start-stop time of the first time period may be based on different user operations:
in some embodiments in combination with some embodiments of the second aspect, the screen capture operation comprises a first operation and a second operation that are not completed consecutively; the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: determining a starting moment of the first time period in response to the first operation of the user; in response to the second operation by the user, a stop time of the first period of time is determined.
In the embodiment of the application, the start-stop time is determined by different discontinuous operations, so that the determination process of the first time period is simpler.
In some embodiments in combination with some embodiments of the second aspect, the screen capture operation comprises a series of operations performed in succession; the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: responding to the operation of triggering screen capture by a user, and displaying a time period selection control; and determining the starting time and the stopping time of the first time period in response to the operation of the user on the time period selection control.
In the embodiment of the application, the start-stop time is determined from the time selection control by the user through a series of continuous operations, so that the determination of the first time period is more accurate.
Optionally, there are many ways for the electronic device to capture the video N times in the first time period:
with reference to some embodiments of the second aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: and carrying out N times of screen capturing on the video within the first time period according to a preset time period to obtain N first screen capturing pictures.
In the embodiment of the application, the electronic equipment performs screen capturing within the first time period according to the preset time period, so that the time interval between different screen capturing pictures is ensured, and the probability of obtaining more information in the third screen capturing picture is improved.
With reference to some embodiments of the second aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: performing target detection on the video within the first time period; and when the position change of the target to be processed in the video is determined to exceed the preset distance threshold, performing screen capture once to generate a first screen capture picture until the N first screen capture pictures are obtained.
In the embodiment of the application, the electronic device performs screen capturing when the moving position of the target to be processed exceeds the preset distance threshold according to the target detection result, so that more information in the background can be exposed in the obtained first screen capturing picture, and more information can be displayed in the third screen capturing picture.
With reference to some embodiments of the second aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: deleting, from the N first screenshot pictures, any first screenshot picture that does not meet the picture quality requirement for target detection.
Optionally, the order of superimposing the second screenshot picture may be obtained in multiple ways:
with reference to some embodiments of the second aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: and superposing the M second screen capture pictures according to a preset default sequence to obtain a third screen capture picture.
In some embodiments, in combination with some embodiments of the second aspect, the preset default order is an order in which temporally subsequent pictures are superimposed in the video.
In the above embodiment, the second screenshot pictures are superimposed in a preset default order, which makes the superimposition faster. If the preset default order places pictures played later in the video on top, the information displayed in the final third screenshot picture is the most recent information.
With reference to some embodiments of the second aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: displaying the superposition sequence of the M second screen capture pictures; and responding to the operation of adjusting the stacking sequence by the user, and stacking the M second screen capturing pictures according to the stacking sequence adjusted by the user to obtain a third screen capturing picture.
In this embodiment of the application, the stacking order can be adjusted by the user, so that the third screenshot picture obtained by superimposition better meets the user's needs.
With reference to some embodiments of the second aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: filling the transparent area in the third screenshot picture with the background color of the third screenshot picture.
In the embodiment of the application, if the third screenshot picture also has a transparent region, the transparent region can be filled with a background color, so that the attractiveness of the display of the third screenshot picture is improved.
In some embodiments combined with some embodiments of the second aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: and filling and completing the transparent area in the third screenshot picture by using an intelligent repairing technology.
In this embodiment of the application, if the third screenshot picture still contains a transparent area, the transparent area can be filled and completed by using an intelligent inpainting technique, improving the appearance of the displayed third screenshot picture.
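For illustration only, the sketch below fills transparent pixels by repeatedly averaging their already-opaque neighbours. This is a deliberately simplified stand-in for an "intelligent inpainting" technique; a real system would more likely use a learning-based or PDE-based inpainting algorithm. All names here are assumptions.

```python
def inpaint(picture, rounds=8):
    """Grow colour into transparent pixels from opaque 4-neighbours.

    A toy stand-in for intelligent inpainting: each round, every still
    transparent pixel takes the average colour of its opaque neighbours.
    """
    h, w = len(picture), len(picture[0])
    pic = [row[:] for row in picture]
    for _ in range(rounds):
        nxt = [row[:] for row in pic]
        for y in range(h):
            for x in range(w):
                if pic[y][x][3] == 0:
                    nbrs = [pic[j][i] for j, i in
                            ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                            if 0 <= j < h and 0 <= i < w and pic[j][i][3] > 0]
                    if nbrs:
                        nxt[y][x] = tuple(
                            sum(c[k] for c in nbrs) // len(nbrs)
                            for k in range(3)) + (255,)
        pic = nxt
    return pic
```

The number of rounds bounds how far colour can propagate, so large holes need either more rounds or a genuinely smarter algorithm.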
With reference to some embodiments of the second aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: performing semi-transparency processing on the area where the target to be processed is located in the first screenshot picture corresponding to the topmost superimposed second screenshot picture, to obtain a semi-transparent target to be processed; and superimposing the semi-transparent target to be processed on the third screenshot picture to obtain a fourth screenshot picture.
In this embodiment of the application, superimposing the semi-transparent target to be processed on the third screenshot picture makes more of the background information available while the action and posture of the target to be processed remain visible, further increasing the amount of information obtained.
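As a sketch of this blending step (illustrative only; the 50 % opacity, the mask representation, and all function names are assumptions, not values fixed by the patent), the target region of the first screenshot picture can be alpha-blended onto the third screenshot picture:

```python
def blend(px_top, px_bottom, alpha=0.5):
    """Blend one pixel of the semi-transparent target over the background."""
    return tuple(int(alpha * t + (1 - alpha) * b)
                 for t, b in zip(px_top[:3], px_bottom[:3])) + (255,)

def overlay_translucent_target(third_pic, first_pic, target_mask, alpha=0.5):
    """Sketch of obtaining a "fourth screenshot picture".

    Wherever target_mask is True, the corresponding pixel of the first
    screenshot picture (the one whose cut-out was superimposed topmost)
    is blended at `alpha` opacity onto the third screenshot picture.
    """
    h, w = len(third_pic), len(third_pic[0])
    return [[blend(first_pic[y][x], third_pic[y][x], alpha)
             if target_mask[y][x] else third_pic[y][x]
             for x in range(w)] for y in range(h)]
```

The result shows the recovered background through the teacher's silhouette, which is the effect described above.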
With reference to some embodiments of the second aspect, in some embodiments, the video is an instructional video and the target to be processed is a teacher.
In a third aspect, an embodiment of the present application provides a chip system, where the chip system is applied to an electronic device and includes one or more processors, and the processors are configured to invoke computer instructions to cause the electronic device to perform the method described in the first aspect and any possible implementation manner of the first aspect.
It can be understood that the chip system may include one processor 110 in the electronic device 100 shown in fig. 3, or may include a plurality of processors 110 in the electronic device 100 shown in fig. 3. The chip system may further include one or more other chips, for example, an image signal processing chip in the camera 193 of the electronic device 100 shown in fig. 3, an image display chip in the display screen 194, and the like, which are not limited herein.
In a fourth aspect, embodiments of the present application provide a computer program product including instructions, which, when run on an electronic device, cause the electronic device to perform the method described in the first aspect and any possible implementation manner of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on an electronic device, cause the electronic device to perform the method described in the first aspect and any possible implementation manner of the first aspect.
It can be understood that the electronic device provided by the second aspect, the chip system provided by the third aspect, the computer program product provided by the fourth aspect, and the computer-readable storage medium provided by the fifth aspect are all used to execute the method provided by the embodiments of the present application. Therefore, for the beneficial effects achieved, reference may be made to the beneficial effects in the corresponding method, and details are not described herein again.
Drawings
FIG. 1 is a diagram illustrating an effect of a screenshot picture obtained by screenshot of a teaching video in the prior art;
FIG. 2 is a schematic diagram illustrating an effect of a screenshot picture obtained by screenshot of a teaching video in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 4 is a block diagram of a software architecture of an electronic device according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a screen capture method in an embodiment of the present application;
FIG. 6 is an interface diagram of a set of screen capture operations provided by an embodiment of the present application;
FIG. 7 is a schematic interface diagram of another set of screen capture operations provided by an embodiment of the present application;
FIG. 8 is a diagram of an exemplary scenario for determining the area where the target to be processed is located in an embodiment of the present application;
FIG. 9 is a diagram of an exemplary scenario of a transparent area in an embodiment of the present application;
FIG. 10 is a schematic illustration of a set of interfaces provided in an embodiment of the present application;
FIG. 11 is an exemplary diagram illustrating a process of one screen capture method in an embodiment of the present application;
fig. 12 is an exemplary schematic diagram of a process of superimposing a second screenshot in an embodiment of the present application;
FIG. 13 is a schematic view of another set of interfaces provided in embodiments of the present application;
FIG. 14 is a schematic view of another set of interfaces provided in embodiments of the present application;
fig. 15 is an exemplary schematic diagram of superimposing a semi-transparent target to be processed on a third screenshot picture in the embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of the present application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the listed items.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature, and in the description of embodiments of the application, unless stated otherwise, "plurality" means two or more.
Since the embodiments of the present application relate to the application of image processing technology, for the sake of understanding, the related terms and concepts related to the embodiments of the present application will be described below.
(1) Deep-learning-based target detection algorithm:
Deep learning is a machine learning technique that learns the intrinsic patterns and representation hierarchies of sample data; the information obtained during learning is very helpful for interpreting data such as text, images, and sound. Its goal is to give machines human-like analysis and learning capabilities, so that they can recognize data such as text, images, and sound.
A deep-learning-based target detection algorithm enables an electronic device to automatically find targets in images and determine their location and size.
It can be understood that, to perform target detection with a deep-learning-based target detection algorithm, labeled samples must first be used as a training set to train the algorithm until it meets the requirements.
The targets detected by the target detection algorithm are determined by the labeled samples used for training. For example, in embodiments of the present application, a series of images with labels on persons may be used to train the target detection algorithm, so that it can identify a person in an image and determine the person's position and body area.
Depending on the requirements of the target detection scenario, the target in the embodiment of the present application may include only a person, or may include a person together with objects related to the person, such as articles in contact with the person's body, which is not limited herein. For each target requirement, a corresponding target detection algorithm is trained using, as the training set, only samples labeled with the targets to be identified.
For example, in the embodiment of the present application, the targets in the training set may be a teacher and objects in physical contact with the teacher. In the embodiment of the present application, the target of target detection is referred to as the target to be processed.
The target detection algorithm in the embodiment of the present application may be preset at the factory, or may be trained by the user using the user's own training set, which is not limited herein.
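To illustrate the role the trained detector plays in the pipeline above, the sketch below returns a bounding box for the target to be processed in a frame. A real implementation would run a trained deep-learning detector (for example an SSD- or YOLO-style network trained on person-labelled samples); this stand-in, with its assumed names and its `is_target` pixel predicate, merely boxes "foreground" pixels so the surrounding logic can be exercised.

```python
def detect_target(frame, is_target):
    """Return (top, left, bottom, right) of the region whose pixels
    satisfy is_target, or None if no target pixel is present.

    `frame` is a 2-D list of pixels; `is_target` stands in for the
    output of a trained detector.
    """
    ys = [y for y, row in enumerate(frame) for px in row if is_target(px)]
    xs = [x for row in frame for x, px in enumerate(row) if is_target(px)]
    if not ys:
        return None
    return (min(ys), min(xs), max(ys), max(xs))
```

In the screen capture method, the returned region is what gets cut out of each first screenshot picture before superimposition.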
Fig. 1 is a schematic diagram illustrating the effect of a screenshot picture obtained by capturing a teaching video in the prior art. When a screenshot of the teaching video is captured, the teacher often blocks the background, so the screenshot picture displays less information.
In the embodiment of the present application, after the user performs a screen capture operation, the electronic device can automatically capture the teaching video multiple times. As the person in the teaching video moves, previously blocked areas become exposed. The electronic device obtains the final screenshot picture by stitching together the areas exposed by movement across the multiple captured screenshots, so that the final screenshot picture can display more information. Fig. 2 is a schematic diagram illustrating the effect of a screenshot picture obtained by capturing a teaching video in the embodiment of the present application.
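The core of this idea can be sketched end to end (illustration only; the patent claims no particular data layout, and the function names and the list-of-tuples picture representation are assumptions): cut the target region out of each captured frame, then superimpose the cut-out frames so that every hole is filled by a frame in which the person had moved away.

```python
CLEAR = (0, 0, 0, 0)

def cut_out_target(frame, target_mask):
    """First screenshot picture -> second screenshot picture: make the
    area where the target to be processed is located fully transparent."""
    return [[CLEAR if target_mask[y][x] else px
             for x, px in enumerate(row)] for y, row in enumerate(frame)]

def stitch_screenshots(frames, masks):
    """Superimpose the cut-out frames, earliest at the bottom, so areas
    the moving person exposed in later frames fill the earlier holes."""
    result = cut_out_target(frames[0], masks[0])
    for frame, mask in zip(frames[1:], masks[1:]):
        layer = cut_out_target(frame, mask)
        result = [[layer[y][x] if layer[y][x][3] > 0 else result[y][x]
                   for x in range(len(result[0]))]
                  for y in range(len(result))]
    return result
```

If the person eventually uncovers every pixel, the stitched result contains no transparent area at all; otherwise the remaining holes are handled by the background fill or inpainting steps described later.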
An exemplary electronic device 100 provided by embodiments of the present application is first described below.
Fig. 3 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
The following describes an embodiment specifically by taking the electronic device 100 as an example. It should be understood that electronic device 100 may have more or fewer components than shown, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: the mobile terminal includes a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the illustrated structure in the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The SIM interface may be used to communicate with the SIM card interface 195, implementing functions to transfer data to or read data from the SIM card.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It may also be used to connect headphones and play audio through them, and to connect other electronic devices, such as AR devices.
It should be understood that the connection relationships between the modules illustrated in the embodiment of the present application are merely illustrative and do not constitute a structural limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt an interface connection manner different from that in the above embodiment, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. By executing the instructions stored in the internal memory 121, the processor 110 performs various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a face recognition function, a fingerprint recognition function, a mobile payment function, and the like), and so on. The data storage area may store data created during the use of the electronic device 100 (such as face information template data, fingerprint information templates, etc.). In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation by means of the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations applied to the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the Messages application icon, an instruction to view the message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the Messages application icon, an instruction to create a new message is executed.
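The threshold dispatch just described can be rendered directly as a small sketch (the threshold value, function name, and instruction strings are illustrative assumptions; the patent fixes none of them):

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative value, not specified by the patent

def dispatch_touch(intensity):
    """Map a touch on the Messages application icon to an operation
    instruction according to the detected touch intensity."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_message"      # light press: view the message
    return "new_message"           # firm press: create a new message
```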
The gyroscope sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during photographing. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the electronic device 100, calculates, according to the shake angle, the distance that the lens module needs to compensate for, and allows the lens to counteract the shake of the electronic device 100 through reverse movement, thereby achieving image stabilization. The gyroscope sensor 180B may also be used in navigation and motion-sensing gaming scenarios.
The barometric pressure sensor 180C is used to measure barometric pressure. In some embodiments, the electronic device 100 calculates the altitude from the barometric pressure value measured by the barometric pressure sensor 180C, to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster by using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. A feature such as automatic unlocking upon flipping open may then be set according to the detected open or closed state of the holster or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the posture of the electronic device, and is applied in applications such as landscape/portrait switching and pedometers.
The distance sensor 180F is used to measure a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, in a photographing scenario, the electronic device 100 may use the distance sensor 180F to measure the distance, to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode, and detects infrared light reflected from nearby objects by using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in a holster mode or a pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy by using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
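The layered temperature processing strategy above can be sketched as a simple policy function. All three threshold values here are hypothetical and chosen only for illustration.

```python
# Sketch of the temperature processing strategy described for sensor 180J.
HIGH_TEMP_C = 45.0       # hypothetical: above this, throttle the nearby processor
HEAT_BATTERY_C = 0.0     # hypothetical: below this, heat the battery 142
BOOST_VOLTAGE_C = -10.0  # hypothetical: below this, also boost battery output voltage

def thermal_actions(temp_c: float) -> list:
    """Return the list of protective actions for a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("reduce_processor_performance")
    if temp_c < HEAT_BATTERY_C:
        actions.append("heat_battery")
    if temp_c < BOOST_VOLTAGE_C:
        actions.append("boost_battery_output_voltage")
    return actions
```

Note that the two low-temperature actions are cumulative in this sketch: at a very low temperature the device both heats the battery and boosts its output voltage.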
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may alternatively be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input, and generate a key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration prompt. The motor 191 may be used for an incoming-call vibration prompt, as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations acting on different areas of the display screen 194. Different application scenarios (such as time reminders, received messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be inserted into the SIM card interface 195 or removed from it, to come into contact with or separate from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with an external memory card. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication.
Fig. 4 is a block diagram of the software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the system is divided into four layers, from top to bottom, an application layer, an application framework layer, runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 4, the application package may include applications (also referred to as applications) such as a screen capture module, a camera, a gallery, a calendar, a call, a map, a navigation, a WLAN, bluetooth, music, a video, a short message, etc.
The screen capture module can be used for supporting the electronic device to execute the screen capture method in the embodiment of the application.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 4, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey a notification-type message, which may automatically disappear after a short stay without requiring user interaction. For example, the notification manager is used to notify of a completed download, a message alert, and so on. The notification manager may also present a notification in the form of a chart or scroll-bar text in the status bar at the top of the system, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog interface. For example, text information is prompted in the status bar, a prompt tone is played, the electronic device vibrates, or an indicator light flashes.
The runtime includes a core library and a virtual machine. The runtime is responsible for scheduling and management of the system.
The core library includes two parts: one part is the function interfaces that need to be called by the Java language, and the other part is the core library of the system.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example, a surface manager, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL ES), and two-dimensional graphics engines (e.g., SGL).
The surface manager is used to manage the display subsystem, and provides fusion of two-dimensional (2D) and three-dimensional (3D) layers for multiple applications.
The media libraries support playback and recording in a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing 3D graphic drawing, image rendering, synthesis, layer processing and the like.
The two-dimensional graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver and a virtual card driver.
The following describes exemplary workflow of the software and hardware of the electronic device 100 in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a touch click operation and the control corresponding to the click operation is the control of the camera application icon: the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or a video through the camera 193.
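The step in which the framework layer matches a raw input event to a control can be sketched as a simple hit test. The control layout, the control names, and the event fields here are hypothetical simplifications; a real framework performs full view-hierarchy hit testing.

```python
# Sketch: identifying which control a raw input event falls on.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RawInputEvent:
    x: int             # touch coordinates
    y: int
    timestamp_ms: int  # timestamp of the touch operation

# Hypothetical layout: control name -> (left, top, right, bottom),
# with right/bottom exclusive.
CONTROLS = {
    "camera_app_icon": (0, 0, 100, 100),
    "gallery_app_icon": (100, 0, 200, 100),
}

def identify_control(event: RawInputEvent) -> Optional[str]:
    """Return the name of the control containing the event, if any."""
    for name, (left, top, right, bottom) in CONTROLS.items():
        if left <= event.x < right and top <= event.y < bottom:
            return name
    return None
```

Once a control such as the camera application icon is identified, the corresponding application entry point would be invoked, as described above.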
The following describes the screen capture method in the embodiment of the present application in detail with reference to the software and hardware structure of the above exemplary electronic device 100. FIG. 5 is a schematic flow chart of a screen capture method in the embodiment of the present application:
S501, in response to a screen capture operation of a user, the electronic device captures the video N times within a first time period to obtain N first screenshot pictures;
wherein N is a positive integer not less than 2.
It is understood that the start and end times of the first period are determined according to a user's screen capturing operation. The first time period may be determined in a different manner according to the user's screen capturing operation, and is not limited herein.
The following exemplarily describes a manner of determining the first period of time according to a screen capture operation of a user:
Illustratively, the screen capture operation of the user may include a first operation and a second operation that are not completed consecutively. In response to the first operation of the user, the electronic device may determine the starting time of the first time period. In response to the second operation of the user, the electronic device may determine the stop time of the first time period. It can be understood that the electronic device may start to perform S501 and the subsequent steps as soon as the starting time of the first time period is determined; it does not need to wait until the stop time of the first time period is determined before starting to perform S501 and the subsequent steps.
Fig. 6 is a schematic interface diagram of a set of screen capturing operations provided in the embodiment of the present application.
Fig. 6 (a) is an exemplary diagram of the user interface 61 when the instructional video is played on the display screen 194 of the electronic device.
Fig. 6 (b) is an exemplary diagram of the user interface 62 after the user triggers the screen capture function. When the user triggers the screen capture function, a screenshot mode option indicator 601 may be displayed in the user interface 62, below which a plurality of screenshot mode selection controls may be included, such as a normal screenshot control 601A, a long-image screenshot control 601B, a first-mode screenshot control 601C, and a second-mode screenshot control 601D.
There may be many operation modes for the user to trigger the screen capture function, for example, triggering a combination of keys, triggering a preset touch action, or tapping the screen a preset number of times, which is not limited herein.
The normal screenshot control 601A is used to provide a normal screenshot function after receiving a click operation of the user;
The long-image screenshot control 601B is used to provide a long-image screenshot function after receiving a click operation of the user;
The first-mode screenshot control 601C is used to provide a video screenshot function after receiving a click operation of the user, and to determine the time of the click operation as the starting time of the first time period;
The second-mode screenshot control 601D is configured to provide a video screenshot function after receiving a click operation of the user, and to provide a time period selection control for the user to select the first time period.
It will be appreciated that this is merely one illustrative example of a screen capture option. In practice, the screen capture option may be displayed in the user interface 62 in other manners as well. The different screen capturing modes available under the screen capturing mode option can be more or less, and are not limited herein.
Fig. 6 (c) is an exemplary diagram of the user clicking the first-mode screenshot control 601C. When the user clicks the first-mode screenshot control 601C in the user interface 62, the electronic device may display, on the display screen 194, the user interface 63 shown in fig. 6 (d).
It can be understood that the operation of the user triggering the screen capture function shown in fig. 6 (b) and the operation of the user clicking the first-mode screenshot control 601C shown in fig. 6 (c) together constitute one implementation of the first operation in the screen capture operation of the user described above. In practical applications, the first operation may also be a single operation. For example, a combination of keys or a preset touch action may be set to directly trigger the function of the first-mode screenshot control. Then, when the user interface 61 shown in fig. 6 (a) is displayed on the display screen 194 of the electronic device, that is, while the teaching video is being played, the electronic device may display, in response to this first operation of the user, the user interface 63 shown in fig. 6 (d) on the display screen 194. This is not limited herein.
Fig. 6 (d) is an exemplary diagram of the user interface 63 in the screen capture state. A screenshot status indicator 602, a cancel control 603, and a stop control 604 may be displayed in the user interface 63.
The screenshot status indicator 602 displays the text "screen capturing", which is used to indicate that the device is currently in the video screen capture state;
the cancel control 603 is configured to cancel the screen capture operation after receiving the user click operation;
The stop control 604 is configured to, after receiving a click operation of the user, determine the time of the click as the stop time of the first time period, and complete the video screenshot function.
As shown in (e) of fig. 6, upon receiving an operation of the user clicking the stop control 604 in the user interface 63, the display screen 194 of the electronic device may display the user interface 64 as shown in (f) of fig. 6.
It is understood that the operation of the click stop control 604 is an implementation of the second operation in the screen capturing operation of the user.
As shown in fig. 6 (f), a screenshot status indicator 605 may be displayed in the user interface 64. The screenshot status indicator 605 displays the text "screenshot successful", which indicates that the video screen capture has succeeded.
It is to be understood that the first and second operations illustrated in fig. 6 are merely one illustrative implementation. In practical applications, the first operation and the second operation may have many other implementations, as long as the first operation can determine the starting time of the first time period, and the second operation can determine the stopping time of the first time period, which is not limited herein.
Illustratively, the screen capture operation of the user may also be completed continuously in one go. In response to the screen capture operation of the user, the electronic device can provide a time period selection control for the user to select the start and stop times of the first time period. After the user selects and confirms the start and stop times of the first time period, S501 and the subsequent steps may continue to be performed.
Fig. 7 is a schematic interface diagram of another set of screen capturing operations provided in the embodiment of the present application.
Fig. 7 (a) and (b) are similar to fig. 6 (a) and (b), and are not repeated here.
As shown in fig. 7 (c), when an operation of the user clicking the second-mode screenshot control 601D in the user interface 62 is received, the user interface 73 shown in fig. 7 (d) may be displayed on the display screen 194 of the electronic device.
In the user interface 73, a time selection control 701, a start time flag 702, a stop time flag 703, a determination control 704, and a cancel control 705 may be included.
Wherein, the time selection control 701 is used for showing the time line of the video.
A start time flag 702, configured to determine a start time of a first time period according to dragging by a user;
a stop time flag 703, configured to determine a stop time of a first time period according to dragging by a user;
The determination control 704 is used to, after receiving a click operation of the user, determine the start and stop times of the first time period according to the positions of the start time flag 702 and the stop time flag 703 on the time selection control 701, and complete the video screenshot function;
and a cancel control 705, configured to cancel the screen capture operation after receiving the user click operation.
As shown in (d) of fig. 7, the user may drag the start time flag 702 and the stop time flag 703 to determine the start time and the stop time of the first time period.
As shown in fig. 7 (e), after dragging the start time flag 702 and the stop time flag 703 into position, the user may click the determination control 704 to confirm the start time and the stop time of the first time period. At this point, the electronic device continues to complete the video screenshot function and displays the user interface 74 shown in fig. 7 (f).
As shown in fig. 7 (f), a screenshot status indicator 706 may be displayed in the user interface 74. The screenshot status indicator 706 displays the text "screenshot successful", which indicates that the video screen capture has succeeded.
It should be understood that fig. 7 is merely an example in which the screen capturing operation is a continuous screen capturing operation, and in practical applications, there may be many other screen capturing operations as long as the start and stop time of the first time period can be determined, and the method is not limited herein.
There are many alternative ways for the electronic device to capture the video N times within the first time period:
For example, the electronic device may capture the screen within the first time period according to a preset time period. For example, assuming that the preset time period is 2 seconds, the electronic device may capture the video every 2 seconds within the first time period to generate a first screenshot picture.
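The preset-period strategy can be sketched as computing the capture instants from the start and stop times. Whether a capture also occurs exactly at the starting instant is our assumption for this sketch.

```python
# Sketch: capture instants for the preset-period screenshot strategy.
def capture_times(start_s: float, stop_s: float, period_s: float) -> list:
    """Instants in [start_s, stop_s] at which a first screenshot is taken."""
    times = []
    n = 0
    t = start_s
    while t <= stop_s:
        times.append(t)
        n += 1
        t = start_s + n * period_s  # recompute to avoid accumulating float error
    return times
```

With the 2-second example from the text, a first time period of 0 to 8 seconds yields five first screenshot pictures.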
For example, after the screen capture function is started, the electronic device may also perform real-time target detection on the video in the first time period, and perform a screen capture each time it is determined that the position change of the detected target to be processed in the video exceeds a preset distance threshold, so as to generate a first screen capture picture.
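The motion-triggered strategy in the preceding paragraph can be sketched as a distance check between successive target positions. Using the center of the detected target's bounding box as its position, and Euclidean distance as the metric, are assumptions of this sketch.

```python
import math

# Sketch: decide whether the detected target has moved far enough since
# the last capture to warrant taking another first screenshot picture.
def should_capture(prev_center, curr_center, distance_threshold: float) -> bool:
    """True if the target moved beyond the preset distance threshold."""
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    return math.hypot(dx, dy) > distance_threshold
```

Each time this returns True, a capture would be taken and the current center would become the new reference position.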
It can be understood that there are many ways to capture the video N times within the first time period. For example, the electronic device may capture the screen once each time it receives a click operation of the user within the first time period, to generate a first screenshot picture, and so on, which is not limited herein.
It can be understood that, in step S501, the N first screenshot pictures are generated continuously within the first time period. The electronic device does not need to wait until all N first screenshot pictures are generated within the first time period before performing the subsequent steps. The subsequent steps may start to be performed as soon as the first of the first screenshot pictures is generated, which is not limited herein.
S502, the electronic equipment performs target detection on the first screenshot picture and determines an area where a target to be processed is located;
after the electronic equipment obtains the first screenshot picture, target detection can be carried out on the first screenshot picture, and the target detection can automatically identify the target to be processed in the first screenshot picture, so that the area of the target to be processed in the picture is determined.
The target detection algorithm used by the electronic device 100 to perform target detection may be preset by the manufacturer and stored in the internal memory 121 of the electronic device 100, may be downloaded from a cloud server when the electronic device 100 needs to use it, may be trained by the processor 110 of the electronic device 100 on a training set selected by the user and then stored in the internal memory 121, or may be obtained from the server of another third-party vendor, and so on, which is not limited herein.
Illustratively, fig. 8 is an exemplary scene diagram of determining the region where the target to be processed is located. Fig. 8 (a) shows a first screenshot picture obtained by capturing the video within the first time period, in which the teacher stands in front of the background. The electronic device performs target detection on the first screenshot picture; since the target to be processed of the target detection algorithm used is the teacher, the region of the teacher in the first screenshot picture can be determined. As shown in fig. 8 (b), the region indicated by the dashed-line box is the region of the teacher determined by target detection, that is, the region where the target to be processed is located.
Optionally, the electronic device may obtain N first screenshot pictures in the first time period, and in the process of performing target detection on the N first screenshot pictures, if a certain first screenshot picture does not meet a picture quality requirement for performing target detection due to a screenshot quality problem, the electronic device may delete the first screenshot picture and continue to process subsequent first screenshot pictures.
S503, the electronic equipment processes the region where the target to be processed is located in the first screenshot picture into a transparent region to obtain M second screenshot pictures;
after the electronic device determines the region where the target to be processed is located in the first screenshot picture, the region can be processed into a transparent region, and M second screenshot pictures are obtained. Wherein M is a positive integer less than or equal to N.
It can be understood that, if the first screenshot picture that does not meet the picture quality requirement for performing the target detection is deleted when the electronic device performs the target detection, M is smaller than N. And if the electronic equipment does not delete the first screenshot picture, M is equal to N.
It should be noted that, after the region where the target to be processed is located in the first screenshot picture is processed into a transparent region to obtain a second screenshot picture, if the second screenshot picture is superimposed on some other picture, the content of that lower picture located at the position of the transparent region can be seen through the transparent region of the second screenshot picture. Fig. 9 is an exemplary scene diagram of a transparent region. As shown in fig. 9 (a), a second screenshot picture is displayed on the display screen 194 of the electronic device 100, in which the region enclosed by the dashed box is a transparent region. As shown in fig. 9 (b), another picture A contains the text information EF and 1234 in the portion corresponding in position to the transparent region of the second screenshot picture, and some other text information in the remaining portion. If the second screenshot picture shown in fig. 9 (a) is placed on top and the picture A shown in fig. 9 (b) is placed below, the picture B shown in fig. 9 (c) will be displayed on the display screen 194 of the electronic device 100. The picture B includes not only the content of the second screenshot picture but also the content of the picture A at the position corresponding to the transparent region of the second screenshot picture.
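The transparency processing of S503 can be sketched on a toy picture representation. Here a picture is a 2D list of pixel values and None stands for a fully transparent pixel; a real implementation would instead zero the alpha channel of an RGBA bitmap over the detected region.

```python
# Sketch of S503: turn the region where the target to be processed is
# located into a transparent region, producing a second screenshot picture.
def make_region_transparent(picture, region):
    """Return a copy of `picture` with `region` made transparent.

    `region` is (left, top, right, bottom), right/bottom exclusive.
    """
    left, top, right, bottom = region
    return [
        [None if (left <= x < right and top <= y < bottom) else pixel
         for x, pixel in enumerate(row)]
        for y, row in enumerate(picture)
    ]
```

The original first screenshot picture is left untouched; the returned second screenshot picture is what the superposition step consumes.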
S504, the electronic device superimposes the M second screenshot pictures to obtain a third screenshot picture;
After the electronic device obtains two or more second screenshot pictures, it can superimpose them. When the electronic device finishes superimposing the M second screenshot pictures, a third screenshot picture is obtained. The third screenshot picture displays more information than each individual first screenshot picture; that information is the information displayed in the background, and does not include the target to be processed.
When M is equal to 2, the third screenshot picture displays the information of the non-transparent region of the second screenshot picture superimposed on the uppermost layer, and the information of a first region corresponding to the transparent region of that uppermost second screenshot picture. The first region is the region on the next-layer second screenshot picture whose position corresponds to the transparent region of the uppermost second screenshot picture; the information of the first region includes the information of the non-transparent region within the first region.
When M is greater than 2, the information of the first region further includes the information of a second region corresponding to the transparent region of the first region. The second region is the region on the second screenshot picture of the next layer whose position corresponds to the transparent region of the first region; the information of the second region includes the information of the non-transparent region of the second region. This continues in the same manner until no transparent region remains in the third screenshot picture, or until the next-layer second screenshot picture is the second screenshot picture superimposed at the lowest layer.
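The layer-by-layer description above amounts to taking, for each pixel position, the value from the topmost layer that is non-transparent there. A minimal sketch, using the same toy representation in which None marks a transparent pixel:

```python
# Sketch of S504: superimpose M second screenshot pictures (topmost first)
# into a third screenshot picture.
def superimpose(layers):
    """Compose same-sized layers; each pixel comes from the topmost
    non-transparent layer at that position."""
    height, width = len(layers[0]), len(layers[0][0])
    result = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for layer in layers:          # walk down from the top layer
                if layer[y][x] is not None:
                    result[y][x] = layer[y][x]
                    break                 # lower layers are hidden here
    return result
```

A pixel stays transparent in the result only if it is transparent in every layer, matching the "until no transparent region remains, or the lowest layer is reached" condition.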
It can be understood that the order of superimposing the M second screenshot pictures is not limited herein. Superimposing may start as soon as two second screenshot pictures have been generated, after which each newly obtained second screenshot picture is superimposed with the picture obtained from the previous superposition; alternatively, all the second screenshot pictures may be superimposed together after all of them have been obtained; alternatively, parts of the obtained second screenshot pictures may be superimposed separately, and the pictures obtained from those separate superpositions may then be superimposed with each other, which is not limited herein.
Optionally, when the electronic device superimposes the M second screenshot pictures, it may use different superimposition orders depending on the situation.
Illustratively, after the user clicks the screen capture setting control in the setting options of the electronic device, the display screen 194 of the electronic device may display the user interface 101 as shown in fig. 10 (a). The user interface 101 is a screen capture setting interface, which may include a screen capture function switch control 1001, a video screen capture overlay sequence selection control 1002, and a screen capture shortcut gesture selection control 1003.
The screen capture function switch control 1001 is used for determining whether to start a screen capture function according to user selection;
the video screenshot stacking sequence selection control 1002 is used for displaying a video screenshot stacking sequence selection interface after receiving a click operation of a user;
and the screen capture shortcut gesture selection control 1003 is configured to display a selection interface of the screen capture shortcut gesture after receiving a click operation of the user.
As shown in (a) of fig. 10, upon receiving an operation of the user clicking the video-screenshot-superimposition-order-selection control 1002, the user interface 102 shown in (b) of fig. 10 may be displayed on the display screen 194 of the electronic device. The user interface 102 is a video screenshot overlay order selection interface.
The user interface 102 may include a manual adjustment switch control 1004 and a default order selection control 1005. The default order selection control 1005 contains two option sub-controls: an option sub-control 1005A for placing the picture with the later playing time on top, and an option sub-control 1005B for placing the picture with the earlier playing time on top.
The manual adjustment switch control 1004 is configured to turn on and off a manual adjustment function according to a user operation.
When the manual adjustment function is off and the electronic device superimposes the M second screenshot pictures in S504, it directly superimposes them in the order represented by the selected sub-control in the default order selection control 1005.
When the manual adjustment function is on, after the electronic device obtains the N first screenshot pictures, it may display the N first screenshot pictures obtained by screen capture, or the M second screenshot pictures obtained by processing, in the order represented by the selected sub-control in the default order selection control 1005. After the user manually adjusts and confirms the order, step S504 is executed.
It should be noted that displaying the N first screenshot pictures for the user to manually adjust the order does not prevent the electronic device from continuing to execute steps S502 and S503, which is not limited herein.
It can be understood that, because the M second screenshots are processed by the N first screenshots, the manual adjustment of the order of the N first screenshots by the user may represent the adjustment of the order of superimposing the M second screenshots by the user.
The option sub-control 1005A, for placing the picture with the later playing time on top, is used to set, after receiving a click operation of the user, that when the second screenshot pictures are superimposed, a picture with a later playing time is placed on top of one with an earlier playing time. When the manual adjustment function is on, the N first screenshot pictures obtained by screen capture, or the M second screenshot pictures obtained by processing, are displayed in this later-on-top order.
The option sub-control 1005B, for placing the picture with the earlier playing time on top, is used to set, after receiving a click operation of the user, that when the second screenshot pictures are superimposed, a picture with an earlier playing time is placed on top of one with a later playing time. When the manual adjustment function is on, the N first screenshot pictures obtained by screen capture, or the M second screenshot pictures obtained by processing, are displayed in this earlier-on-top order.
As shown in (b) of fig. 10, suppose that in the video screenshot stacking order selection interface the user sets the manual adjustment switch control 1004 to the off state, and the option sub-control 1005A of the default order selection control 1005, for placing the picture with the later playing time on top, is selected. Then, when step S504 is executed, the electronic device superimposes the M second screenshot pictures in sequence, with later-playing pictures on top, to obtain the third screenshot picture.
It will be appreciated that fig. 10 is merely one exemplary illustration of a screen capture setting interface and a video screenshot stacking order selection interface. In practical applications, the screen capture setting interface may have more or fewer options than shown in (a) of fig. 10. Similarly, the video screenshot stacking order selection interface may or may not be present; if it is present, it may have more or fewer options than shown in (b) of fig. 10, which is not limited herein.
Taking the screen capture setting interface in the electronic device as shown in fig. 10 (a), the video screen capture superimposition order selection interface as shown in fig. 10 (b), and 3 second screen capture pictures obtained in step S503 as an example, a process of superimposing the 3 second screen capture pictures in step S504 to obtain a third screen capture picture is exemplarily described below:
fig. 11 is an exemplary schematic diagram of the processing procedure of a screen capture method in an embodiment of the present application. In step S501, within the first time period, the electronic device captures the screen to obtain 3 first screenshot pictures, denoted picture 1, picture 2, and picture 3 in order of their playing time in the video, from earliest to latest.
In picture 1, there are two character strings on the blackboard, ABCDE and 123456, but the teacher's body blocks the 123 portion of the string 123456, so the complete content of the two strings cannot be seen in picture 1.
In picture 2, a character string has been added to the blackboard: FGHIJK. The teacher has moved, and the picture can completely show the ABCDE and FGHIJK strings on the blackboard, but the teacher's body now blocks the 3456 portion of the 123456 string. Therefore, the complete content of all 3 character strings cannot be seen in picture 2.
In picture 3, a new character string has been added to the blackboard: 789. The teacher has moved again, and the picture can completely show the ABCDE and 789 strings on the blackboard, but the teacher's body blocks the 1 portion of the 123456 string and the JK portion of the FGHIJK string. Therefore, the complete content of all 4 character strings cannot be seen in picture 3.
According to steps S502 and S503, the electronic device may identify the areas where the teacher is located in picture 1, picture 2, and picture 3 through target detection, and process these areas into transparent areas to obtain 3 second screenshot pictures, which are: picture 1A, picture 2A, and picture 3A.
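Steps S502 and S503 can be sketched as follows, reusing the grid-with-`None` convention for transparency. Treating the detected target as an axis-aligned bounding box `(top, left, bottom, right)` is a simplifying assumption for this illustration; an actual detector may instead produce a per-pixel mask.

```python
def make_transparent(picture, box):
    """Return a copy of `picture` with the detected target's bounding box
    (top, left, bottom, right; half-open ranges) cleared to transparent (None)."""
    top, left, bottom, right = box
    out = [row[:] for row in picture]   # copy, so the first screenshot picture is preserved
    for r in range(top, bottom):
        for c in range(left, right):
            out[r][c] = None            # remove the target to be processed
    return out

# Picture 1: the teacher ("T") covers the lower-left corner of the board.
picture_1 = [["A", "B"], ["T", "2"]]
picture_1a = make_transparent(picture_1, (1, 0, 2, 1))
# picture_1a == [["A", "B"], [None, "2"]]
```

Keeping the original first screenshot picture intact matters for the later optional step in which the target region is re-used to build the semi-transparent overlay.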
Since the option sub-control 1005A, for placing the picture with the later playing time on top, is selected in the video screenshot stacking order selection interface, when step S504 is executed, the electronic device superimposes the obtained second screenshot pictures in sequence, with the picture having the later playing time placed on top.
As shown in fig. 11, after the second screenshot picture 1A and the second screenshot picture 2A are obtained, the playing time of picture 2 (corresponding to picture 2A) is later in the video than that of picture 1 (corresponding to picture 1A). Therefore, when the electronic device superimposes picture 2A and picture 1A, it places picture 2A on top of picture 1A, obtaining picture 4 as shown in fig. 11. In picture 4, the ABCDE and FGHIJK character strings are displayed completely; as for the 123456 string, more of it is displayed than in either picture 1 or picture 2, and only the digit 3 cannot be displayed, because it is blocked by the teacher in both picture 1 and picture 2.
Fig. 12 is a schematic diagram of the process of superimposing picture 1A and picture 2A to generate picture 4. Picture 1A has transparent area 1, and picture 2A has transparent area 2. Picture 2A is placed on top and picture 1A on the bottom to obtain picture 4. Picture 4 therefore displays not only the non-transparent content of picture 2A, but also, within transparent area 2 of picture 2A, the content of picture 1A at the corresponding position. The region where transparent area 2 of picture 2A overlaps transparent area 1 of picture 1A remains transparent, and is referred to as transparent area 3 in picture 4.
As shown in fig. 11, after the second screenshot picture 3A is obtained, the playing time of picture 3 (corresponding to picture 3A) is later in the video than that of picture 2 (corresponding to picture 2A). Therefore, when the electronic device superimposes picture 3A and picture 4, it places picture 3A on top and picture 4 on the bottom, obtaining picture 5 in fig. 11. In the superimposed picture 5, the ABCDE, FGHIJK, 789, and 123456 character strings are all displayed completely, showing more content than any of picture 1, picture 2, or picture 3. At this point, all 3 of the obtained second screenshot pictures have been superimposed, and the resulting picture 5 is the third screenshot picture.
For example, as shown in fig. 13, suppose the user sets the manual adjustment switch control 1004 to the on state in the video screenshot stacking order selection interface; the electronic device then by default displays the first screenshot pictures for the user to select the stacking order. In step S501, after the user performs the screen capture operation and the user interfaces shown in (a)-(e) of fig. 6 or (a)-(e) of fig. 7 are displayed on the display screen 194 of the electronic device, the user interface 141 shown in (a) of fig. 14 may be displayed before (f) of fig. 6 or (f) of fig. 7. In this user interface 141, the electronic device pops up a superimposition order selection window 1401.
The superimposition order selection window 1401 may include a text identifier, "Select superimposition order (top to bottom)", to indicate to the user that this window can be used to adjust the superimposition order of the video screenshot pictures and that the pictures are displayed top to bottom in superimposition order. Since the option sub-control 1005A of the default order selection control 1005, for placing the picture with the later playing time on top, is in the selected state as shown in fig. 13, the 4 first screenshot pictures displayed for order adjustment in fig. 14 are shown in the order: the first screenshot captured at 1 hour 20 minutes of the video, the first screenshot captured at 1 hour 16 minutes, the first screenshot captured at 1 hour 13 minutes, and the first screenshot captured at 1 hour 8 minutes.
The superimposition order selection window 1401 may further include a confirm control 1402 for confirming the superimposition order adjusted by the user, and a cancel control 1403 for discarding the user's adjustment and directly using the default order.
As shown in (a) of fig. 14, in response to the user dragging the first screenshot captured at 1 hour 20 minutes of the video and the first screenshot captured at 1 hour 8 minutes so that they swap positions, the user interface 141 shown in (b) of fig. 14 is displayed on the display screen 194 of the electronic device. The order of these two first screenshot pictures is thereby changed, and when the second screenshot pictures are superimposed in step S504, the electronic device superimposes the corresponding second screenshot pictures in the order adjusted by the user.
As shown in fig. 14 (b), in response to an operation of the user clicking on the determination control 1402, the electronic device may display a user interface as shown in fig. 6 (f) or fig. 7 (f) on the display screen.
It is understood that there may be other ways to determine the order used when the M second screen shots are superimposed, and the order is not limited herein.
Optionally, after the electronic device completes the superposition of the M second screenshot pictures to obtain the third screenshot picture, it may store the third screenshot picture in the internal memory 121 as the screenshot picture finally obtained by the screen capture operation, or output it to the display screen 194 as that final screenshot picture; this is not limited herein.
Optionally, before storing or outputting the third screenshot picture, if a transparent area still exists in it, the electronic device may fill the transparent area with the background color of the picture, or may fill and complete it using an intelligent image repair (inpainting) technology; this is not limited herein.
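The background-color fill can be sketched as below, again on the grid-with-`None` representation. Choosing the most common non-transparent pixel as the "background color" is an assumption made for illustration; the device might instead sample the picture border or use inpainting.

```python
from collections import Counter

def fill_with_background(picture):
    """Fill remaining transparent pixels (None) with the dominant opaque color."""
    opaque = [px for row in picture for px in row if px is not None]
    background = Counter(opaque).most_common(1)[0][0]   # most frequent color
    return [[background if px is None else px for px in row] for row in picture]

# A mostly-green blackboard with one transparent pixel left after superposition.
patched = fill_with_background([["g", "g"], [None, "g"]])
# patched == [["g", "g"], ["g", "g"]]
```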
Optionally, before or after storing or outputting the third screenshot picture, the electronic device may further perform semi-transparent processing on the area where the target to be processed is located in the first screenshot picture corresponding to the second screenshot picture superimposed on the uppermost layer, to obtain a semi-transparent target to be processed; it may then superimpose the semi-transparent target to be processed on the third screenshot picture to obtain a fourth screenshot picture, and store the fourth screenshot picture.
The degree of translucency of the area where the target to be processed is located may be set according to a preset value, may be manually adjusted in real time by the user, or may be intelligently adjusted by the electronic device according to the content of the third screenshot picture and of the area where the target to be processed is located, to a degree that does not obscure the content of the third screenshot picture; this is not limited herein.
When the semi-transparent target to be processed is superimposed on the third screenshot picture, it may be superimposed at the same position that the corresponding target to be processed occupies in the first screenshot picture, or at a position on the third screenshot picture specified by the user; this is not limited herein.
With reference to the example of fig. 11, fig. 15 is an exemplary diagram of superimposing a semi-transparent target to be processed on the third screenshot picture in an embodiment of the present application. The second screenshot picture 3A superimposed on the uppermost layer corresponds to the first screenshot picture 3, so the electronic device may perform semi-transparent processing on the area where the target to be processed is located in picture 3 to obtain the semi-transparent target to be processed. It then superimposes the semi-transparent target on the third screenshot picture 5 to obtain picture 6, i.e. the fourth screenshot picture. In this fourth screenshot picture, not only can all the character strings on the blackboard be displayed in full, but the teacher's translucent posture can also be seen.
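The translucent overlay that yields the fourth screenshot picture can be sketched as a per-pixel blend. For brevity, pixels here are single grayscale values rather than RGB triples, and `alpha` stands in for the preset translucency described above; both are illustrative assumptions.

```python
def overlay_translucent(third, target_region, box, alpha=0.5):
    """Blend the target region (cut from the uppermost first screenshot picture)
    onto the third screenshot picture at its original position.

    `box` is (top, left, bottom, right), half-open; `alpha` is the opacity
    of the overlaid target (0.0 = invisible, 1.0 = fully opaque).
    """
    top, left, bottom, right = box
    out = [row[:] for row in third]
    for r in range(top, bottom):
        for c in range(left, right):
            src = target_region[r - top][c - left]
            out[r][c] = alpha * src + (1 - alpha) * out[r][c]   # linear blend
    return out

# Blend a 1x1 "teacher" patch of value 1.0 at 50% opacity over a 0.0 background.
fourth = overlay_translucent([[0.0, 0.0]], [[1.0]], (0, 0, 1, 1))
# fourth == [[0.5, 0.0]]
```

Superimposing at a user-specified position, as the paragraph above allows, only changes the `box` offset passed in.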
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
As used in the above embodiments, the term "when" may be interpreted, depending on the context, to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "when it is determined" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)".
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be wholly or partially implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state drive), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.

Claims (29)

1. A screen capture method, comprising:
in response to a screen capturing operation of a user, the electronic equipment performs N times of screen capturing on a video in a first time period to obtain N first screen capturing pictures, wherein N is a positive integer not less than 2;
the electronic equipment carries out target detection on the first screenshot picture, and determines the area of a target to be processed in the first screenshot picture, wherein the target to be processed is the target of the target detection;
the electronic equipment processes the region where the target to be processed is located in the first screenshot picture into a transparent region to obtain M second screenshot pictures, wherein M is a positive integer less than or equal to N;
the electronic equipment superposes the M second screen capture pictures to obtain a third screen capture picture;
when the M is equal to 2, displaying information of a non-transparent area of the second screenshot picture superposed on the uppermost layer and information of a first area corresponding to a transparent area of the second screenshot picture superposed on the uppermost layer in the third screenshot picture; the first area is an area corresponding to the transparent area of the uppermost layer of second screenshot picture at the position of the next layer of second screenshot picture; the information of the first area comprises information of a non-transparent area in the first area;
when M is larger than 2, the information of the first area further comprises information of a second area corresponding to the transparent area of the first area; the second area is an area corresponding to the transparent area of the first area in the position on the second screenshot picture of the next layer; the information of the second area comprises information of a non-transparent area of the second area; and repeating the steps until no transparent area exists in the third screenshot picture, or the second screenshot picture of the next layer is the second screenshot picture superposed on the lowest layer.
2. The method according to claim 1, wherein the electronic device performs, in response to a screen capture operation of a user, N screen captures of a video within a first time period to obtain N first screen capture pictures, specifically including:
in response to a screen capture operation of a user, the electronic equipment determines a starting time of a first time period and a stopping time of the first time period;
and the electronic equipment captures the video for N times in the first time period to obtain N first screenshot pictures.
3. The method of claim 2, wherein the screen capture operation comprises a first operation and a second operation that are not completed consecutively;
the responding to the screen capture operation of the user, the determining, by the electronic device, the start time and the stop time of the first time period specifically includes:
in response to the first operation of a user, the electronic device determining a starting moment of the first time period;
in response to the second operation by the user, the electronic device determines a stop time of the first time period.
4. The method according to claim 2 or 3, wherein the screen capture operation comprises a series of operations performed consecutively;
the responding to the screen capture operation of the user, the determining, by the electronic device, the start time and the stop time of the first time period specifically includes:
responding to the operation of triggering screen capture by a user, and displaying a time period selection control by the electronic equipment;
in response to a user operation on the time period selection control, the electronic device determines a start time and a stop time of the first time period.
5. The method according to any one of claims 2 to 4, wherein the electronic device performs N screen shots on the video in the first time period to obtain N first screen shots, and specifically includes:
and the electronic equipment performs N screen shots on the video within the first time period according to a preset time period to obtain N first screen shots.
6. The method according to any one of claims 2 to 4, wherein the electronic device performs N screen shots on the video in the first time period to obtain N first screen shots, and specifically includes:
the electronic equipment performs target detection on the video in the first time period;
and when the position change of the target to be processed in the video is determined to exceed a preset distance threshold, performing screen capture once to generate a first screen capture picture until the N first screen capture pictures are obtained.
7. The method according to any one of claims 1 to 6, further comprising:
and the electronic equipment deletes the first screenshot picture which does not meet the picture quality requirement for target detection in the N first screenshot pictures.
8. The method according to any one of claims 1 to 7, wherein the electronic device superimposes the M second screen shots to obtain a third screen shot, specifically including:
and the electronic equipment superposes the M second screen capture pictures according to a preset default sequence to obtain a third screen capture picture.
9. The method of claim 8, wherein the default order is an order in which a picture with a later playing time in the video is superimposed on top.
10. The method according to any one of claims 1 to 7, wherein the electronic device superimposes the M second screen shots to obtain a third screen shot, specifically including:
the electronic equipment displays the superposition sequence of the M second screen capture pictures;
and in response to the operation of adjusting the stacking sequence by the user, the electronic equipment stacks the M second screen capture pictures according to the stacking sequence adjusted by the user to obtain a third screen capture picture.
11. The method according to any one of claims 1 to 10, further comprising:
and the electronic equipment fills a transparent area in the third screenshot picture with the background color in the third screenshot picture.
12. The method according to any one of claims 1 to 10, further comprising:
and the electronic equipment fills and completes the transparent area in the third screenshot picture by using an intelligent repairing technology.
13. The method according to any one of claims 1 to 12, further comprising:
the electronic equipment performs semitransparent processing on an area where a target to be processed is located in a first screenshot picture corresponding to a second screenshot picture superposed on the uppermost layer to obtain a semitransparent target to be processed;
and the electronic equipment superposes the semitransparent target to be processed on the third screen capture picture to obtain a fourth screen capture picture.
14. An electronic device, characterized in that the electronic device comprises: one or more processors and memory;
the memory coupled with the one or more processors, the memory to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform:
responding to a screen capturing operation of a user, and performing N times of screen capturing on a video in a first time period to obtain N first screen capturing pictures, wherein N is a positive integer not less than 2;
performing target detection on the first screenshot picture, and determining an area where a target to be processed is located in the first screenshot picture, wherein the target to be processed is a target of the target detection;
processing the region where the target to be processed is located in the first screenshot picture into a transparent region to obtain M second screenshot pictures, wherein M is a positive integer less than or equal to N;
superposing the M second screen capture pictures to obtain a third screen capture picture;
when the M is equal to 2, displaying information of a non-transparent area of the second screenshot picture superposed on the uppermost layer and information of a first area corresponding to a transparent area of the second screenshot picture superposed on the uppermost layer in the third screenshot picture; the first area is an area corresponding to the transparent area of the uppermost layer of second screenshot picture at the position of the next layer of second screenshot picture; the information of the first area comprises information of a non-transparent area in the first area;
when M is larger than 2, the information of the first area further comprises information of a second area corresponding to the transparent area of the first area; the second area is an area corresponding to the transparent area of the first area in the position on the second screenshot picture of the next layer; the information of the second area comprises information of a non-transparent area of the second area; and repeating the steps until no transparent area exists in the third screenshot picture, or the second screenshot picture of the next layer is the second screenshot picture superposed on the lowest layer.
15. The electronic device of claim 14, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
determining a starting time of a first time period and a stopping time of the first time period in response to a screen capturing operation of a user;
and carrying out N times of screen capturing on the video in the first time period to obtain N first screen capturing pictures.
16. The electronic device of claim 15, wherein the screen capture operation comprises a first operation and a second operation that are not completed consecutively;
the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
determining a starting moment of the first time period in response to the first operation of a user;
determining a stop time of the first period of time in response to the second operation by the user.
17. The electronic device of claim 15 or 16, wherein the screen capture operation comprises a series of operations that are performed in succession;
the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
responding to the operation of triggering screen capture by a user, and displaying a time period selection control;
and determining the starting time and the stopping time of the first time period in response to the operation of the user on the time period selection control.
18. The electronic device of any of claims 15-17, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
and carrying out N times of screen capturing on the video within the first time period according to a preset time period to obtain N first screen capturing pictures.
19. The electronic device of any of claims 15-17, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
performing target detection on the video within the first time period;
and when the position change of the target to be processed in the video is determined to exceed a preset distance threshold, performing screen capture once to generate a first screen capture picture until the N first screen capture pictures are obtained.
20. The electronic device of any of claims 14-19, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
and deleting the first screenshot picture which does not meet the picture quality requirement for target detection in the N first screenshot pictures.
21. The electronic device of any of claims 14-20, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
superimposing the M second screenshot pictures in a preset default order to obtain a third screenshot picture.
22. The electronic device of claim 21, wherein the preset default order is an order in which pictures occurring later in the video are superimposed on top.
23. The electronic device of any of claims 14-20, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
displaying a superimposition order of the M second screenshot pictures;
and in response to a user operation adjusting the superimposition order, superimposing the M second screenshot pictures in the user-adjusted order to obtain the third screenshot picture.
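Claims 21 to 23 reduce to stacking M cut-out layers in a given order, with later layers drawn on top. In the minimal sketch below each cut-out is represented as a mapping from pixel coordinates to values; that sparse-dict representation is an assumption for illustration only.

```python
def superimpose(layers):
    """Sketch of claims 21-23: stack cut-out layers; layers later in the
    list end up on top. Under the preset default order of claim 22,
    pictures occurring later in the video are placed later in this list."""
    canvas = {}
    for layer in layers:
        canvas.update(layer)  # opaque pixels of later layers overwrite earlier ones
    return canvas
```

Reordering the input list is all that the user's adjustment in claim 23 changes; the compositing step itself is unaffected.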
24. The electronic device of any of claims 14-23, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
filling a transparent area in the third screenshot picture with a background color from the third screenshot picture.
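The background fill of claim 24 can be sketched in the same sparse-dict representation: any pixel position left uncovered after superimposition (i.e. transparent) takes the background colour. The single-colour background is an illustrative simplification.

```python
def fill_transparent(canvas, width, height, background):
    """Sketch of claim 24: fill every uncovered (transparent) pixel
    position in the composited picture with the background colour."""
    return {(x, y): canvas.get((x, y), background)
            for y in range(height) for x in range(width)}
```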
25. The electronic device of any of claims 14-23, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
filling in a transparent area in the third screenshot picture using an intelligent image inpainting technique.
26. The electronic device of any of claims 14-25, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
performing semi-transparency processing on an area where a target to be processed is located in the first screenshot picture corresponding to the topmost superimposed second screenshot picture, to obtain a semi-transparent target to be processed;
and superimposing the semi-transparent target to be processed onto the third screenshot picture to obtain a fourth screenshot picture.
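The semi-transparency processing of claim 26 is, at the pixel level, an alpha blend of the target over the composited picture. A minimal per-pixel sketch, assuming RGB tuples and a fixed opacity (both illustrative assumptions):

```python
def blend(target_pixel, canvas_pixel, alpha=0.5):
    """Sketch of claim 26: alpha-blend a pixel of the semi-transparent
    target over the corresponding pixel of the third screenshot picture."""
    return tuple(round(alpha * t + (1 - alpha) * c)
                 for t, c in zip(target_pixel, canvas_pixel))
```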
27. A chip system applied to an electronic device, the chip system comprising one or more processors configured to invoke computer instructions to cause the electronic device to perform the method of any of claims 1-13.
28. A computer program product comprising instructions that, when the computer program product is run on an electronic device, cause the electronic device to perform the method of any of claims 1-13.
29. A computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-13.
CN202010455200.1A 2020-05-26 2020-05-26 Screen capturing method and electronic equipment Active CN113723397B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010455200.1A CN113723397B (en) 2020-05-26 2020-05-26 Screen capturing method and electronic equipment
PCT/CN2021/094619 WO2021238740A1 (en) 2020-05-26 2021-05-19 Screen capture method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010455200.1A CN113723397B (en) 2020-05-26 2020-05-26 Screen capturing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113723397A true CN113723397A (en) 2021-11-30
CN113723397B CN113723397B (en) 2023-07-25

Family

ID=78671944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010455200.1A Active CN113723397B (en) 2020-05-26 2020-05-26 Screen capturing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN113723397B (en)
WO (1) WO2021238740A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562525B (en) * 2022-03-15 2023-06-13 荣耀终端有限公司 Screen capturing method and device
CN115086774B (en) * 2022-05-31 2024-03-05 北京达佳互联信息技术有限公司 Resource display method and device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103997616A (en) * 2013-12-20 2014-08-20 三亚中兴软件有限责任公司 Method and device for processing video conference picture, and conference terminal
CN104516644A (en) * 2014-12-09 2015-04-15 广东欧珀移动通信有限公司 Method for free screen capture and terminal
CN106227419A (en) * 2016-07-11 2016-12-14 北京小米移动软件有限公司 Screenshotss method and device
CN106851385A (en) * 2017-02-20 2017-06-13 北京金山安全软件有限公司 Video recording method and device and electronic equipment
CN106970754A (en) * 2017-03-28 2017-07-21 北京小米移动软件有限公司 The method and device of screenshotss processing
CN109005446A (en) * 2018-06-27 2018-12-14 聚好看科技股份有限公司 A kind of screenshotss processing method and processing device, electronic equipment, storage medium
CN109144370A (en) * 2018-09-30 2019-01-04 珠海市君天电子科技有限公司 A kind of screenshotss method, apparatus, terminal and computer-readable medium
CN109525874A (en) * 2018-09-27 2019-03-26 维沃移动通信有限公司 A kind of screenshotss method and terminal device
CN110502117A (en) * 2019-08-26 2019-11-26 三星电子(中国)研发中心 Screenshot method and electric terminal in electric terminal
CN111143015A (en) * 2019-12-31 2020-05-12 维沃移动通信有限公司 Screen capturing method and electronic equipment

Also Published As

Publication number Publication date
WO2021238740A1 (en) 2021-12-02
CN113723397B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN112532869B (en) Image display method in shooting scene and electronic equipment
CN113794800B (en) Voice control method and electronic equipment
CN112130742B (en) Full screen display method and device of mobile terminal
CN112231025B (en) UI component display method and electronic equipment
CN115866121B (en) Application interface interaction method, electronic device and computer readable storage medium
CN112887583B (en) Shooting method and electronic equipment
WO2020029306A1 (en) Image capture method and electronic device
CN114390139B (en) Method for presenting video by electronic equipment in incoming call, electronic equipment and storage medium
CN113838490B (en) Video synthesis method and device, electronic equipment and storage medium
CN112383664B (en) Device control method, first terminal device, second terminal device and computer readable storage medium
CN113891009B (en) Exposure adjusting method and related equipment
CN113986070B (en) Quick viewing method for application card and electronic equipment
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN113170037A (en) Method for shooting long exposure image and electronic equipment
CN113837984A (en) Playback abnormality detection method, electronic device, and computer-readable storage medium
CN113641271A (en) Application window management method, terminal device and computer readable storage medium
WO2021238740A1 (en) Screen capture method and electronic device
CN110286975B (en) Display method of foreground elements and electronic equipment
CN113448658A (en) Screen capture processing method, graphical user interface and terminal
CN112449101A (en) Shooting method and electronic equipment
CN112637477A (en) Image processing method and electronic equipment
CN114911400A (en) Method for sharing pictures and electronic equipment
CN114444000A (en) Page layout file generation method and device, electronic equipment and readable storage medium
CN114650330A (en) Method, electronic equipment and system for adding operation sequence
CN115032640B (en) Gesture recognition method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant