CN106406710B - Screen recording method and mobile terminal - Google Patents


Info

Publication number
CN106406710B
Authority
CN
China
Prior art keywords
user
screen
screen recording
recording area
eyeballs
Prior art date
Legal status
Active
Application number
CN201610870935.4A
Other languages
Chinese (zh)
Other versions
CN106406710A (en)
Inventor
郑旭增
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201610870935.4A
Publication of CN106406710A
Application granted
Publication of CN106406710B
Status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a screen recording method and a mobile terminal. The screen recording method comprises the following steps: displaying a screen recording area on a screen of the mobile terminal; updating the position of the screen recording area on the screen, and capturing an image within the screen recording area according to the position of the screen recording area on the screen; and generating a screen recording file from the captured images. According to the embodiment of the invention, a screen recording area is formed on the screen of the mobile terminal, and only the content displayed within the screen recording area is recorded during screen recording, so that the content of a local area of the screen is recorded, the operation of recording a local area is simplified, and the method is convenient for the user. In addition, the display position of the screen recording area on the screen can be adjusted, so that the user records only the required images, which reduces unnecessary information and the storage space occupied by the screen recording file.

Description

Screen recording method and mobile terminal
Technical Field
The invention relates to the technical field of communication, in particular to a screen recording method and a mobile terminal.
Background
The hardware and software of modern mobile terminals are capable of taking screenshots and even recording the screen. The screen recording method of a conventional mobile terminal and its effect are described below, taking a mobile phone as an example.
When recording the screen, the user can only start recording by turning on the phone's screen recording switch or by launching a third-party application with a screen recording function, and then stop recording through the interactive interface provided by that function, such as a stop button. The recorded content is stored in the phone's storage directory as a video file.
If the user wants to record only part of the screen, with the recorded area movable in real time according to the user's intention, the existing procedure is as follows:
1. Turn on the screen recording switch and start recording.
2. Turn off the screen recording switch when recording is finished.
3. Copy the recorded video file from the phone's storage directory to a personal computer.
4. Edit the video with a tool that has video editing functions (such as Corel VideoStudio), keep only the content of the specified partial area, and finally generate the regional screen recording.
First, tool software with video editing functions generally has high hardware requirements, so the phone user has to move the video file to a well-configured personal computer or similar platform for editing. Second, video editing software is difficult to use and demands a considerable learning cost or even professional knowledge. These two high thresholds mean that most ordinary phone users cannot produce a regional screen recording, while the original full-screen recording contains too much unnecessary information.
Disclosure of Invention
The embodiment of the invention provides a method for recording a screen, which aims to solve the problem of complex operation when partial area contents in the screen are recorded in the prior art.
In a first aspect, a method for recording a screen is provided, and is applied to a mobile terminal, and includes:
displaying a screen recording area on a screen of the mobile terminal;
updating the position of the screen recording area on the screen, and intercepting an image in the screen recording area according to the position of the screen recording area on the screen;
and generating a screen recording file according to the intercepted image.
In a second aspect, a mobile terminal is provided, including:
the display module is used for displaying a screen recording area on a screen of the mobile terminal;
the processing module is used for updating the position of the screen recording area displayed by the display module on the screen and intercepting an image in the screen recording area according to the position of the screen recording area on the screen;
and the generating module is used for generating a screen recording file according to the image intercepted by the processing module.
Therefore, a screen recording area is formed on the screen of the mobile terminal, and only the content displayed within the screen recording area is recorded during screen recording, so that the content of a local area of the screen is recorded, the operation of recording a local area is simplified, and the method is convenient for the user. In addition, the display position of the screen recording area on the screen can be adjusted, so that the user records only the required images, which reduces unnecessary information and the storage space occupied by the screen recording file.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a method for recording a screen according to a first embodiment of the present invention;
fig. 2 is a schematic diagram of a screen recording area according to a first embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a control of movement of a screen recording area according to a first embodiment of the present invention;
fig. 4 is a flowchart illustrating a method of recording a screen according to a second embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a displacement of an eyeball position coordinate according to a second embodiment of the invention;
fig. 6 is a schematic diagram illustrating a displacement of a screen recording area according to a second embodiment of the invention;
fig. 7 is a flowchart illustrating a method of recording a screen according to a third embodiment of the present invention;
fig. 8 is a block diagram of a mobile terminal according to a fourth embodiment of the present invention;
fig. 9 is a second block diagram of a mobile terminal according to a fourth embodiment of the present invention;
fig. 10 is a block diagram of a mobile terminal according to a fifth embodiment of the present invention;
fig. 11 is a block diagram of a mobile terminal according to a sixth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First embodiment
The embodiment of the invention provides a method for recording a screen, which is applied to a mobile terminal. As shown in fig. 1, the method for recording a screen includes:
Step 101, displaying a screen recording area on a screen of the mobile terminal.
In the embodiment of the invention, before screen recording starts, a screen recording area is displayed on the screen of the mobile terminal, and the area of the screen recording area is smaller than or equal to the area of the screen. To highlight the screen recording area, it can be rendered as a semi-transparent region, or the display area outside it can be set to a semi-transparent state; alternatively, to keep both the screen content and the recording area easy to view, a frame can be added around the screen recording area while the display area outside the frame remains transparent. Any of these options makes it easier for the user to operate on the screen recording area.
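As a rough, non-authoritative illustration of such an overlay (assuming an Android-style View API; the class name RecordingAreaOverlay, the default rectangle, and the color values are hypothetical and not taken from the patent), everything outside the recording area can be dimmed while the area itself stays clear and framed:

    import android.content.Context
    import android.graphics.Canvas
    import android.graphics.Paint
    import android.graphics.RectF
    import android.view.View

    // Hypothetical overlay view: the recording area stays fully visible,
    // the rest of the screen is covered by a semi-transparent scrim.
    class RecordingAreaOverlay(context: Context) : View(context) {

        var recordingArea = RectF(200f, 400f, 800f, 1000f)
            set(value) { field = value; invalidate() } // redraw when moved or resized

        private val scrim = Paint().apply { color = 0x88000000.toInt() }  // ~53% black
        private val border = Paint().apply {
            style = Paint.Style.STROKE
            strokeWidth = 4f
            color = 0xFFFFFFFF.toInt()
        }

        override fun onDraw(canvas: Canvas) {
            super.onDraw(canvas)
            val r = recordingArea
            // Four rectangles around the recording area form the dimmed region.
            canvas.drawRect(0f, 0f, width.toFloat(), r.top, scrim)                  // above
            canvas.drawRect(0f, r.bottom, width.toFloat(), height.toFloat(), scrim) // below
            canvas.drawRect(0f, r.top, r.left, r.bottom, scrim)                     // left
            canvas.drawRect(r.right, r.top, width.toFloat(), r.bottom, scrim)       // right
            canvas.drawRect(r, border)                                              // frame
        }
    }

Whether the surrounding region is dimmed or left transparent with only the frame drawn is a presentation choice; either corresponds to one of the display options described above.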
Step 102, updating the position of the screen recording area on the screen, and capturing the image in the screen recording area according to the position of the screen recording area on the screen.
In the embodiment of the invention, the position of the screen recording area on the screen can be adjusted so as to record the content required by the user, and in the recording process, only the image displayed in the screen recording area is obtained, and the image displayed outside the screen recording area is not recorded.
Step 103, generating a screen recording file according to the intercepted image.
Since the images recorded during screen recording are the content displayed within the screen recording area, processing the images captured in step 102 produces the screen recording file. When the screen recording area is smaller than the full screen, the generated screen recording file is a video of a local screen area.
Further, the size, shape, and position of the initially displayed screen recording area are generally system defaults. As shown in fig. 2, a screen recording interface is displayed on the screen of the mobile terminal, and the screen recording area 201 is a rectangle located in the middle of that interface. Of course, the user can also customize the size, shape, and position of the initially displayed screen recording area according to actual requirements. During screen recording, the size, shape, and position of the screen recording area can likewise be adjusted as needed; fig. 3 is a schematic diagram of adjusting the position of the screen recording area 201. This makes it convenient for the user to adjust the content to be recorded, adds to the fun of recording, and improves the user experience.
Further, one way to update the location of the screen recording area is for the user to adjust its position through a dragging operation. The specific implementation is as follows: when a dragging operation on the screen recording area is detected, the displacement of the dragging operation is obtained, the moving direction and moving distance of the screen recording area are calculated from that displacement, and the position of the screen recording area is updated according to the calculated moving direction and moving distance, as sketched below.
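A minimal platform-neutral Kotlin sketch of this drag handling (the Area type and function name are hypothetical; they only illustrate applying the drag displacement and keeping the area on screen):

    data class Area(val left: Float, val top: Float, val width: Float, val height: Float)

    // Apply the displacement of a drag gesture to the recording area,
    // clamping so that the whole area stays within the screen.
    fun moveByDrag(area: Area, dragDx: Float, dragDy: Float,
                   screenWidth: Float, screenHeight: Float): Area {
        val newLeft = (area.left + dragDx).coerceIn(0f, screenWidth - area.width)
        val newTop = (area.top + dragDy).coerceIn(0f, screenHeight - area.height)
        return area.copy(left = newLeft, top = newTop)
    }

For example, a drag of (120f, -40f) on a 1080 x 1920 screen simply shifts the area right and up, stopping at the screen edges.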
Of course, it is understood that other realizations may be used, such as adjusting the position of the screen recording area in the screen recording process through voice commands, gravity control, and the like.
In summary, in the embodiment of the present invention, a screen recording area is formed on the screen of the mobile terminal, and only the content displayed in the screen recording area is recorded during screen recording, thereby implementing screen recording of a local area, simplifying the operation of local-area screen recording, and making it convenient for the user. In addition, by adjusting the size, shape, and position of the screen recording area according to his or her own needs, the user can record only the required images, which reduces unnecessary information, reduces the storage space occupied by the screen recording file, and improves the user experience.
Second embodiment
The embodiment of the invention provides a method for recording a screen, which is applied to a mobile terminal. As shown in fig. 4, the method for recording a screen includes:
Step 401, receiving a start request of a screen recording function.
In the embodiment of the present invention, the request for starting the screen recording function may include: at least one of a physical key triggering instruction, a touch gesture operation instruction, a voice triggering instruction, a fingerprint triggering instruction and a pressing triggering instruction. Specifically, the manner in which the instruction is triggered may be one or more physical key triggers, key time interval triggers, touch screen single-point or multi-point operation triggers, infrared sensor triggers, acceleration sensor triggers, gyroscope sensor triggers, temperature sensor triggers, fingerprint recognition triggers, voice recognition triggers, gesture recognition triggers, or image recognition triggers.
Of course, the above triggering manners are only examples; any other manner capable of triggering the screen recording function of the terminal may be applied to the embodiments of the present invention, and need not be enumerated here.
Step 402, displaying a screen recording area on a screen of the mobile terminal according to the received starting request.
In the embodiment of the invention, before screen recording starts, a screen recording area is displayed on the screen of the mobile terminal, and the area of the screen recording area is smaller than or equal to the area of the screen. To highlight the screen recording area, it can be rendered as a semi-transparent region, or the display area outside it can be set to a semi-transparent state; alternatively, to keep both the screen content and the recording area easy to view, a frame can be added around the screen recording area while the display area outside the frame remains transparent. Any of these options makes it easier for the user to operate on the screen recording area.
Step 403, acquiring the moving direction and the moving amplitude of the eyeballs of the user.
According to the embodiment of the invention, the screen recording area can be controlled to move on the screen according to the movement change of the eyeballs of the user, so that the content displayed in the screen area concerned by the sight of the user can be recorded in real time. In the implementation process, firstly, the motion change of the user eyeballs (namely the irises) in the screen recording process needs to be detected in real time through a face recognition technology, and the moving direction and the moving amplitude of the user eyeballs are obtained.
Step 404, calculating the moving direction and the moving distance of the screen recording area according to the moving direction and the moving amplitude of the eyeballs of the user.
After the moving direction and the moving amplitude of the user eyeballs in the screen recording process are obtained, the moving direction and the moving distance of the screen recording area can be calculated according to the moving direction and the moving amplitude of the user eyeballs, namely the moving direction and the moving amplitude of the user eyeballs are converted into the moving direction and the moving amplitude of the screen recording area, and the specific implementation process is as follows:
the method comprises the steps of firstly acquiring a first user image through a front camera of a mobile terminal, then acquiring the moving direction and the moving amplitude of eyeballs of a user in the first user image, then determining the displacement of the eyeballs of the user according to the moving direction and the moving amplitude of the eyeballs of the user in the first user image (as shown in figure 5), calculating the distance from the eyeballs of the user to a screen according to the proportion of the size of the first user image to the size of a face of the user in the first user image, and finally calculating the moving direction and the moving distance of a screen recording area according to the displacement of the eyeballs of the user and the distance from the eyeballs of the user to the screen.
The conversion can be implemented mathematically with a scaling factor k: the moving direction and moving distance of the screen recording area are calculated through the scaling factor k, and the final result is the offset of the x and y coordinates of the screen recording area, dx = k·x and dy = k·y, as shown in fig. 6. The scaling factor k is determined by the distance s from the user's eyeballs to the screen, for example k = t·s, where t can be obtained through experiments (see the sketch below).
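A minimal sketch of this conversion (hypothetical names; t is assumed to have been calibrated experimentally as described above):

    // Convert an eyeball displacement (ex, ey) measured in the camera image into
    // an offset (dx, dy) of the screen recording area: dx = k * ex, dy = k * ey,
    // with k = t * s, where s is the eye-to-screen distance.
    fun areaOffsetFromEyeMove(
        ex: Float, ey: Float,          // eyeball displacement components
        eyeToScreenDistance: Float,    // s, e.g. estimated from the face size ratio
        t: Float                       // experimentally calibrated constant
    ): Pair<Float, Float> {
        val k = t * eyeToScreenDistance
        return Pair(k * ex, k * ey)
    }

In this sketch a larger eye-to-screen distance yields a larger scaling factor k, matching the relation k = t·s above: the further the user is from the screen, the larger the on-screen movement produced by the same eyeball displacement.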
In order to improve the accuracy of the calculation result, the moving direction and the moving amplitude of the eyeballs of the two eyes can be averaged, and the displacement of the eyeballs of the user can be determined according to the average value.
It is understood that the distance from the user's eyeballs to the screen can also be obtained through an infrared sensor or a sonar (ultrasonic) sensor on the mobile terminal, and the specific choice can be made according to actual requirements.
Further, in the embodiment of the present invention, the moving direction and moving amplitude of the user's eyeballs in the first user image are obtained as follows: a second user image is collected through the front camera of the mobile terminal, and the position coordinates of the user's eyeballs in the second user image and in the first user image are obtained; the moving direction and moving amplitude of the user's eyeballs in the first user image are then derived from these two sets of position coordinates. In other words, the change in the position coordinates of the user's eyeballs between the two images is converted into a change in the position coordinates of the screen recording area, thereby controlling the movement of the screen recording area.
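The per-frame eyeball movement can be sketched as follows (platform-neutral Kotlin, hypothetical names), averaging the two eyes as suggested above to reduce detection noise:

    data class Point(val x: Float, val y: Float)

    // Eye positions detected in one camera frame (user image).
    data class EyePositions(val leftEye: Point, val rightEye: Point)

    // Movement of the user's eyeballs between the second (previous) and
    // first (current) user images, averaged over both eyes.
    fun eyeMovement(previous: EyePositions, current: EyePositions): Point {
        val dxLeft = current.leftEye.x - previous.leftEye.x
        val dyLeft = current.leftEye.y - previous.leftEye.y
        val dxRight = current.rightEye.x - previous.rightEye.x
        val dyRight = current.rightEye.y - previous.rightEye.y
        return Point((dxLeft + dxRight) / 2f, (dyLeft + dyRight) / 2f)
    }

The returned vector gives the moving direction, and its length gives the moving amplitude used in step 404.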
Step 405, updating the position of the screen recording area according to the moving direction and the moving distance of the screen recording area.
After the moving direction and the moving distance of the screen recording area are obtained, the position of the screen recording area can be updated according to the moving direction and the moving distance of the screen recording area, and the position of the screen recording area can be adjusted.
Step 406, capturing an image in the screen recording area according to the position of the screen recording area on the screen.
In the embodiment of the invention, in the recording process, only the image displayed in the screen recording area is acquired, and the image displayed outside the screen recording area is not recorded, so that after the position of the screen recording area is determined, the image in the screen recording area is captured and stored. The mode is different from a full screen recording technology in the prior art, so that the recording of excessive unnecessary information can be reduced, the occupation of a screen recording file on a storage space is reduced, and the use experience of a user is improved.
When capturing the image in the screen recording area, full-screen images can be obtained at a preset frame rate, and the part of each full-screen frame displayed within the screen recording area is then cropped out and stored. The preset frame rate may be user-defined or a system default, and its maximum value depends on the performance of the mobile terminal.
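A rough Kotlin sketch of this per-frame cropping (the full-screen grab and storage callbacks are hypothetical placeholders; Bitmap.createBitmap is the standard Android cropping call):

    import android.graphics.Bitmap
    import android.graphics.Rect

    // Crop one full-screen frame down to the screen recording area.
    fun cropToRecordingArea(fullFrame: Bitmap, area: Rect): Bitmap =
        Bitmap.createBitmap(fullFrame, area.left, area.top, area.width(), area.height())

    // Sketch of the capture loop: grab full-screen frames at the preset frame
    // rate and keep only the portion inside the recording area.
    fun recordLoop(area: Rect, frameRate: Int,
                   grabFullScreenFrame: () -> Bitmap,   // hypothetical frame source
                   store: (Bitmap) -> Unit,             // hypothetical frame sink
                   isRecording: () -> Boolean) {
        val frameIntervalMs = 1000L / frameRate
        while (isRecording()) {
            store(cropToRecordingArea(grabFullScreenFrame(), area))
            Thread.sleep(frameIntervalMs)                // crude pacing at the preset rate
        }
    }

A real implementation would pace frames more precisely and run off the UI thread; the sketch only shows the grab-then-crop order described above.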
Step 407, generating a screen recording file according to the intercepted image.
In this step, the captured images in the screen recording area are processed to generate the screen recording file. When the screen recording area is smaller than the full screen, the generated screen recording file is a video of a local screen area.
Furthermore, in the embodiment of the present invention, in addition to controlling the movement of the screen recording area according to the movement of the user's eyeballs, the size and shape of the screen recording area can be controlled according to user operations. For example, the user can enlarge or shrink the screen recording area by double-tapping it, by changing the distance between two touch points on the screen, or by dragging one corner of the screen recording area to change its shape and/or area. This makes it convenient for the user to record the required content, reduces the recording of unnecessary information, and adds to the fun of recording and of viewing the generated screen recording file.
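The pinch-to-resize variant can be sketched as follows (platform-neutral Kotlin, hypothetical names): the ratio between the new and old distances of the two touch points scales the recording area about its center.

    import kotlin.math.hypot

    data class SizedArea(val centerX: Float, val centerY: Float,
                         val width: Float, val height: Float)

    // Distance between the two touch points of a pinch gesture.
    fun touchDistance(x1: Float, y1: Float, x2: Float, y2: Float): Float =
        hypot(x2 - x1, y2 - y1)

    // Scale the recording area about its center by the ratio of the new
    // touch-point distance to the old one (pinch out enlarges, pinch in shrinks).
    fun pinchResize(area: SizedArea, oldDistance: Float, newDistance: Float): SizedArea {
        val scale = newDistance / oldDistance
        return area.copy(width = area.width * scale, height = area.height * scale)
    }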
Further, the action of starting screen recording can be triggered through a specific gesture, such as a double-tap, a long press, or a hard press on the screen recording area.
In summary, in the embodiment of the present invention, a screen recording area is formed on the screen of the mobile terminal, and only the content displayed in the screen recording area is recorded during screen recording, thereby implementing screen recording of a local area, simplifying the operation of local-area screen recording, and making it convenient for the user.
Third embodiment
The embodiment of the invention provides a method for recording a screen, which is applied to a mobile terminal. As shown in fig. 7, the method of recording a screen includes:
Step 701, receiving a start request of a screen recording function.
In the embodiment of the present invention, the request for starting the screen recording function may include: at least one of a physical key triggering instruction, a touch gesture operation instruction, a voice triggering instruction, a fingerprint triggering instruction and a pressing triggering instruction. Specifically, the manner in which the instruction is triggered may be one or more physical key triggers, key time interval triggers, touch screen single-point or multi-point operation triggers, infrared sensor triggers, acceleration sensor triggers, gyroscope sensor triggers, temperature sensor triggers, fingerprint recognition triggers, voice recognition triggers, gesture recognition triggers, or image recognition triggers.
Of course, the above triggering manners are only examples; any other manner capable of triggering the screen recording function of the terminal may be applied to the embodiments of the present invention, and need not be enumerated here.
Step 702, displaying a screen recording area on a screen of the mobile terminal according to the received starting request.
In the embodiment of the invention, before screen recording starts, a screen recording area is displayed on the screen of the mobile terminal, and the area of the screen recording area is smaller than or equal to the area of the screen. To highlight the screen recording area, it can be rendered as a semi-transparent region, or the display area outside it can be set to a semi-transparent state; alternatively, to keep both the screen content and the recording area easy to view, a frame can be added around the screen recording area while the display area outside the frame remains transparent. Any of these options makes it easier for the user to operate on the screen recording area.
In the embodiment of the present invention, after the screen recording function is started, a screen recording object needs to be determined first, and then a screen recording area is formed and displayed on the screen of the mobile terminal according to the screen recording object, and the specific implementation method thereof is as follows:
the method comprises the steps of recognizing at least one pattern with preset characteristics from an image displayed on a screen as a recognition object, identifying the recognition object in the image displayed on the screen, determining the recognition object selected by the selection operation as a screen recording object when the selection operation of a user on the identified recognition object is detected, and forming and displaying a screen recording area on the screen of the mobile terminal according to the determined screen recording object.
After the screen recording object is selected, the mobile terminal can automatically form a screen recording area around the selected screen recording object and adjust the size and position of the area, so that every screen recording object remains inside the screen recording area while the recording of unnecessary information is reduced. To make the automatic adjustment of the screen recording area more accurate, more than one picture can be used for automatic image recognition; for example, the mobile terminal can recognize the front, side, and back of the screen recording object from several pictures.
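One possible way to form the area from the selected objects is sketched below (hypothetical Kotlin; the margin value and names are illustrative only): the bounding boxes of all selected objects are merged, a margin is added, and the result is clamped to the screen.

    data class Box(val left: Float, val top: Float, val right: Float, val bottom: Float)

    // Form a recording area that encloses every selected recording object
    // (the list is assumed non-empty), adds a margin, and stays on screen.
    fun areaAroundObjects(objects: List<Box>, margin: Float,
                          screenWidth: Float, screenHeight: Float): Box =
        Box(
            (objects.minOf { it.left } - margin).coerceAtLeast(0f),
            (objects.minOf { it.top } - margin).coerceAtLeast(0f),
            (objects.maxOf { it.right } + margin).coerceAtMost(screenWidth),
            (objects.maxOf { it.bottom } + margin).coerceAtMost(screenHeight)
        )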
The patterns with preset features include, but are not limited to: people, animals, and other objects (such as automobiles); the specific set can be configured according to actual requirements.
The recognition may be performed on a pattern with preset features in a static image displayed on the screen, or on a pattern with preset features in a dynamic image displayed on the screen. After a pattern with preset features is recognized, it is identified, for example by framing the recognition object with a rectangular box. The user then selects the screen recording object, which may be one or more, from the identified recognition objects. If the content to be recorded is a dynamic video, the video can be paused and the patterns with preset features in the resulting static image recognized and identified; alternatively, the video can be played for a preset period, the patterns with preset features appearing during that period are recognized, and after playback all patterns recognized during the process are collated and displayed to the user so that a screen recording object can be selected; or the patterns with preset features can be recognized and identified during normal playback of the video, with the screen recording area formed after the user selects a screen recording object.
Step 703, acquiring the position coordinates of the screen recording object corresponding to the screen recording area.
After the screen recording area is formed, the position coordinates of the screen recording object are acquired in real time, so that the screen recording area can track the screen recording object in time and record it.
Step 704, updating the position of the screen recording area according to the position coordinates of the screen recording object.
In the screen recording process, because the screen recording object may move in position and change in posture, the position coordinates of the screen recording object need to be detected in real time so as to adjust the state of the screen recording area, so that all screen recording objects are in the screen recording area.
When updating the position of the screen recording area according to the position coordinates of the screen recording object, at least one pattern with preset features can first be recognized from the image displayed on the screen as a recognition object; it is then judged whether a recognition object matching the screen recording object exists, and when it does, the position of the screen recording area is updated according to the position coordinates of that recognition object. For example, a human face can be matched against the screen recording object by the shape of the face and the relative positions of the facial features, and an automobile by its shape, color, and the like.
Further, if it is determined that the image displayed in one frame contains no screen recording object, the frames without a screen recording object are counted, and when the consecutive count of such frames is greater than or equal to a preset count value, the screen recording function is turned off. For example, with a preset count value of 3600 and a recording rate of 60 frames per second, the screen recording process exits when 3600 consecutive frames contain no screen recording object, that is, when no screen recording object appears on the screen for 60 seconds.
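A minimal sketch of this absence counter (hypothetical names; 3600 frames corresponds to 60 seconds at 60 frames per second, as in the example above):

    // Stop recording once the screen recording object has been missing from
    // `missingFrameLimit` consecutive frames (e.g. 3600 frames = 60 s at 60 fps).
    class AbsenceCounter(private val missingFrameLimit: Int = 3600) {
        private var missingFrames = 0

        // Called once per captured frame; returns true when recording should stop.
        fun onFrame(objectDetected: Boolean): Boolean {
            missingFrames = if (objectDetected) 0 else missingFrames + 1
            return missingFrames >= missingFrameLimit
        }
    }

Note that any frame containing the object resets the counter, so only a continuous absence triggers the shutdown.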
Step 705, capturing an image in the screen recording area according to the position of the screen recording area on the screen.
In the embodiment of the invention, in the recording process, only the image displayed in the screen recording area is acquired, and the image displayed outside the screen recording area is not recorded, so that after the position of the screen recording area is determined, the image in the screen recording area is captured and stored. The mode is different from a full screen recording technology in the prior art, so that the recording of excessive unnecessary information can be reduced, the occupation of a screen recording file on a storage space is reduced, and the use experience of a user is improved.
When capturing the image in the screen recording area, full-screen images can be obtained at a preset frame rate, and the part of each full-screen frame displayed within the screen recording area is then cropped out and stored. The preset frame rate may be user-defined or a system default, and its maximum value depends on the performance of the mobile terminal.
Step 706, generating a screen recording file according to the intercepted image.
In this step, the captured images in the screen recording area are processed to generate the screen recording file. When the screen recording area is smaller than the full screen, the generated screen recording file is a video of a local screen area.
Furthermore, in the embodiment of the present invention, in addition to adjusting the position of the screen recording area in real time according to the position movement of the screen recording object, so that all screen recording objects are in the screen recording area, the size and shape of the screen recording area can be controlled according to the state change of the screen recording object, so as to achieve a better recording effect.
Further, the action of starting screen recording can be triggered through a specific gesture, such as a double-tap, a long press, or a hard press on the screen recording area.
In summary, in the embodiments of the present invention, a screen recording area is formed on a screen of a mobile terminal, and only contents displayed in the screen recording area are recorded in the screen recording process, so that screen recording in a local area is achieved, operation of screen recording in the local area is simplified, and convenience is brought to a user.
Fourth embodiment
The embodiment of the present invention provides a mobile terminal 800, which can implement the details of the screen recording methods described in the first to third embodiments, and achieve the same effect. As shown in fig. 8, the mobile terminal 800 includes:
a display module 801, configured to display a screen recording area on a screen of the mobile terminal.
In the embodiment of the present invention, before screen recording starts, a screen recording area is displayed on the screen of the mobile terminal through the display module 801, and the area of the screen recording area is smaller than or equal to the area of the screen. To highlight the screen recording area, it can be rendered as a semi-transparent region, or the display area outside it can be set to a semi-transparent state; alternatively, to keep both the screen content and the recording area easy to view, a frame can be added around the screen recording area while the display area outside the frame remains transparent. Any of these options makes it easier for the user to operate on the screen recording area.
The processing module 802 is configured to update the position of the screen recording area displayed by the display module 801 on the screen, and capture an image in the screen recording area according to the position of the screen recording area on the screen.
In the embodiment of the present invention, the processing module 802 may adjust the position of the screen recording area on the screen, so as to record the content required by the user, and in the recording process, only the image displayed in the screen recording area is obtained, and the image displayed outside the screen recording area is not recorded.
The generating module 803 is configured to generate a screen recording file according to the image captured by the processing module 802.
Since the images recorded during screen recording are the content displayed within the screen recording area, the generating module 803 processes the images captured by the processing module 802 to generate the screen recording file. When the screen recording area is smaller than the full screen, the generated screen recording file is a video of a local screen area.
Further, as shown in fig. 9, the processing module 802 includes: a first processing sub-module (not shown), a second processing sub-module 8022, or a third processing sub-module (not shown).
The first processing submodule is used for acquiring the displacement of dragging operation when the dragging operation of the screen recording area is detected, calculating the moving direction and the moving distance of the screen recording area according to the displacement, and updating the position of the screen recording area according to the moving direction and the moving distance of the screen recording area.
In the embodiment of the invention, the user can adjust the position of the screen recording area through dragging operation, and the mode is simple and flexible to operate.
The second processing sub-module 8022 is configured to obtain a moving direction and a moving amplitude of the user's eyeball, calculate a moving direction and a moving distance of the screen recording area according to the moving direction and the moving amplitude of the user's eyeball, and update the position of the screen recording area according to the moving direction and the moving distance of the screen recording area.
In the embodiment of the invention, the screen recording area can be controlled to move on the screen according to the movement change of the eyeballs of the user, so that the content displayed in the screen area concerned by the sight of the user can be recorded in real time.
And the third processing submodule is used for acquiring the position coordinates of the screen recording object corresponding to the screen recording area and updating the position of the screen recording area according to the position coordinates of the screen recording object.
In the embodiment of the invention, the position of the screen recording area can be automatically adjusted by tracking the screen recording object, so that the requirement of automatically recording a specific object in the screen is met, and the recording process is more intelligent.
Further, controlling the screen recording area to move on the screen according to the movement of the user's eyeballs means converting the moving direction and moving amplitude of the user's eyeballs into the moving direction and moving distance of the screen recording area. This is implemented by the units 80221 to 80225 described below. As shown in fig. 9, the second processing sub-module 8022 includes:
the collecting unit 80221 is configured to collect a first user image through the front camera.
The first user image described herein is an image having a user, particularly a face of the user. The image is an image acquired in a screen recording process.
An acquiring unit 80222, configured to acquire a moving direction and a moving amplitude of an eyeball of the user in the first user image acquired by the acquiring unit 80221.
If it is desired to control the movement of the screen recording region through the movement change of the user's eyeball, first, the moving direction and the moving amplitude of the user's eyeball in the first user image need to be obtained through the obtaining unit 80222.
In order to improve the accuracy of the calculation result, the moving direction and the moving amplitude of the eyeballs of the two eyes can be averaged, and the displacement of the eyeballs of the user can be determined according to the average value.
A determining unit 80223, configured to determine a displacement amount of the user's eyeball according to the moving direction and the moving amplitude of the user's eyeball in the first user image acquired by the acquiring unit 80221.
After the acquiring unit 80222 acquires the moving direction and the moving amplitude of the user's eyeball in the first user image, the determination unit 80223 determines the displacement amount of the user's eyeball according to the moving direction and the moving amplitude of the user's eyeball in the first user image.
The first calculating unit 80224 is configured to calculate a distance from an eyeball of the user to the screen according to a ratio of the size of the first user image acquired by the acquiring unit 80221 to the size of the face of the user in the first user image.
In order to implement the conversion process, when the acquisition unit 80221 acquires the first user image, the distance between the eyeballs of the user and the screen needs to be calculated, so as to convert the displacement of the eyeballs of the user into the displacement of the screen recording area, thereby controlling the movement of the screen recording area.
A second calculating unit 80225, configured to calculate a moving direction and a moving distance of the screen recording area according to the displacement amount of the user's eyeball determined by the determining unit 80223 and the distance from the user's eyeball to the screen calculated by the first calculating unit 80224.
Finally, from the displacement of the user's eyeballs and the distance from the user's eyeballs to the screen, the moving direction and moving distance of the screen recording area corresponding to a given movement of the user's eyeballs can be calculated.
Further, before the moving direction and moving distance of the screen recording area are calculated from the moving direction and moving amplitude of the user's eyeballs, the moving direction and moving amplitude of the user's eyeballs in the first user image must be obtained. This is implemented by the modules 804 to 805 and the sub-units 802221 to 802222 described below. As shown in fig. 9, the mobile terminal 800 further includes:
and the acquisition module 804 is configured to acquire a second user image through the front camera.
The second user image described here is an image containing the user, particularly the user's face; it is the frame collected immediately before the first user image.
An obtaining module 805, configured to obtain position coordinates of the user's eyeball in the second user image acquired by the acquiring module 804.
After the acquisition module 804 acquires the second user image, the acquisition module 805 determines the position coordinates of the user's eyeball in the second user image.
Wherein, the obtaining unit 80222 includes:
a first acquiring subunit 802221, configured to acquire position coordinates of the user's eyeball in the first user image acquired by the acquisition unit 80221.
After the acquisition unit 80221 acquires the first user image, the position coordinates of the user's eyeball in the first user image are acquired by the first acquisition sub-unit 802221.
The second obtaining sub-unit 802222 is configured to obtain the moving direction and moving amplitude of the user's eyeballs in the first user image according to the position coordinates of the user's eyeballs in the first user image obtained by the first acquiring sub-unit 802221 and the position coordinates of the user's eyeballs in the second user image obtained by the obtaining module 805.
Finally, the second obtaining sub-unit 802222 calculates the moving direction and moving amplitude of the user's eyeballs in the first user image from the position coordinates of the eyeballs in the first and second user images, that is, the change in the eyeball position coordinates between the two images; this change is then converted into a change in the position coordinates of the screen recording area, thereby controlling the movement of the screen recording area.
Further, in the embodiment of the present invention, after the screen recording function is started, the screen recording object must first be determined, and the screen recording area is then formed and displayed on the screen of the mobile terminal according to that object. Determining the screen recording object is implemented by the modules 806 to 808 described below. As shown in fig. 9, the mobile terminal further includes:
The recognition module 806 is configured to recognize at least one pattern with preset features from the image displayed on the screen as a recognition object.
The patterns with preset features include, but are not limited to: people, animals, and other objects (such as automobiles); the specific set can be configured according to actual requirements.
The identification process may be to identify a pattern with a preset feature in a static image displayed on a screen, or to identify a pattern with a preset feature in a dynamic image displayed on a screen. If the content to be recorded is a dynamic video, the dynamic video can be in a pause state, and then a graph with preset characteristics in a static image is identified; the dynamic video can also be played for a preset time period, the graphs with preset characteristics appearing in the playing time period are identified, and after the playing time is over, all the graphs with preset characteristics identified in the playing process are sorted and displayed to the user; the method can also identify the graphs with preset characteristics in the normal playing process of the dynamic video, and the specific situation can be designed according to actual requirements.
An identification module 807, configured to identify, in the image displayed on the screen, the recognition object recognized by the recognition module 806.
After the recognition module 806 recognizes the pattern with the predetermined characteristic, it is identified by the identification module 807, such as framing the recognition object by a square frame.
A determining module 808, configured to, when a selection operation of the identified identification object by the user is detected, determine the identification object selected by the selection operation as a screen recording object.
After the identification module 807 identifies the identification objects, a screen recording object is selected by the user from the identified identification objects, and the number of the screen recording objects may be one or more.
Further, the third processing sub-module includes:
and the recognition unit is used for recognizing at least one pattern with preset characteristics from the image displayed on the screen as a recognition object.
And the processing unit is used for judging whether an identification object matched with the screen recording object exists or not, and updating the position of the screen recording area according to the position coordinate of the identification object when the identification object matched with the screen recording object exists.
In the embodiment of the invention, in order to track the screen recording object in real time while it is being recorded, the patterns with preset features in the image displayed on the screen are first recognized during recording; each recognized pattern is then feature-matched against the screen recording object selected by the user to determine whether it is the screen recording object. When the similarity between the two is greater than or equal to a preset similarity value, the recognized pattern is determined to be the screen recording object, and the position of the screen recording area is updated according to the position coordinates of that pattern so as to track the screen recording object. The preset similarity is a relatively high percentage, such as 70%, and the specific value can be chosen according to actual requirements.
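The matching decision can be sketched as follows (hypothetical Kotlin; the similarity function is a toy placeholder standing in for whatever feature comparison — face shape and facial-feature proportions, vehicle shape and color, and so on — an implementation actually uses):

    class Candidate(val features: FloatArray, val centerX: Float, val centerY: Float)

    // Toy feature comparison returning a value in [0, 1]; a real matcher differs.
    fun similarity(a: FloatArray, b: FloatArray): Float {
        val squaredDistance = a.zip(b) { x, y -> (x - y) * (x - y) }.sum()
        return 1f / (1f + squaredDistance)
    }

    // Among the recognized patterns, pick the one that matches the chosen
    // recording object, if its similarity reaches the preset threshold (e.g. 70%).
    fun matchRecordingObject(candidates: List<Candidate>, targetFeatures: FloatArray,
                             threshold: Float = 0.7f): Candidate? =
        candidates
            .map { it to similarity(it.features, targetFeatures) }
            .filter { it.second >= threshold }
            .maxByOrNull { it.second }
            ?.first

If a match is found, the recording area is re-centered on its coordinates; if not, the absence counting described in the third embodiment applies.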
In summary, in the mobile terminal provided by the embodiment of the present invention, the display module 801 displays a screen recording area on the screen of the mobile terminal, the processing module 802 updates the position of the screen recording area and performs the screen recording operation according to that position, and the generating module 803 generates a screen recording file from the recorded content. During screen recording, only the content displayed in the screen recording area is recorded, which implements screen recording of a local area, simplifies the operation of local-area screen recording, and is convenient for the user. The user can record only the required images simply by adjusting the size, shape, and position of the screen recording area according to his or her own needs, which reduces unnecessary information, reduces the storage space occupied by the screen recording file, and improves the user experience.
Fifth embodiment
Fig. 10 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 1000 shown in fig. 10 includes: at least one processor 1001, a memory 1002, at least one network interface 1004, and a user interface 1003. The mobile terminal further includes a front camera. The various components of the mobile terminal 1000 are coupled together by a bus system 1005. It is understood that the bus system 1005 is used to enable communication among these components. In addition to a data bus, the bus system 1005 includes a power bus, a control bus, and a status signal bus. However, for the sake of clarity, the various buses are collectively labeled in fig. 10 as the bus system 1005.
The user interface 1003 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen).
It is to be understood that the memory 1002 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 1002 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 1002 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 10021 and applications 10022.
The operating system 10021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application 10022 includes various applications, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. The program implementing the method according to the embodiment of the present invention may be included in the application program 10022.
In the embodiment of the present invention, by calling a program or an instruction stored in the memory 1002, specifically, a program or an instruction stored in the application program 10022, the processor 1001 is configured to display a screen recording area on a screen of the mobile terminal, and when the position of the screen recording area on the screen is updated, capture an image in the screen recording area according to the position of the screen recording area on the screen, and generate a screen recording file according to the captured image.
The method disclosed in the embodiment of the present invention may be applied to, or implemented by, the processor 1001. The processor 1001 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 1001 or by instructions in the form of software. The processor 1001 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory 1002, and the processor 1001 reads the information in the memory 1002 and completes the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the processor 1001 is further configured to: when the user interface 1003 detects a dragging operation on the screen recording area, acquire the displacement amount of the dragging operation, calculate the moving direction and the moving distance of the screen recording area according to the displacement amount, and update the position of the screen recording area according to the moving direction and the moving distance of the screen recording area (a sketch of this drag-based alternative is given after these alternatives);
or acquire the moving direction and the moving amplitude of the user's eyeballs, calculate the moving direction and the moving distance of the screen recording area according to the moving direction and the moving amplitude of the user's eyeballs, and update the position of the screen recording area according to the moving direction and the moving distance of the screen recording area;
or acquire the position coordinates of the screen recording object corresponding to the screen recording area, and update the position of the screen recording area according to the position coordinates of the screen recording object.
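As a rough illustration of the first, drag-based alternative referenced above, the following sketch derives a moving direction and moving distance from the drag displacement and clamps the updated area to the screen; the (x, y, w, h) region representation and the clamping behaviour are assumptions made here for illustration only:

import math

def update_region_by_drag(region, drag_dx, drag_dy, screen_w, screen_h):
    """Derive the moving direction and moving distance of the screen recording
    area from the displacement of the dragging operation, then update its position."""
    x, y, w, h = region
    moving_distance = math.hypot(drag_dx, drag_dy)
    moving_direction = math.degrees(math.atan2(drag_dy, drag_dx))  # 0 degrees = rightwards
    # Keep the recording area fully inside the screen after the move.
    new_x = max(0, min(x + drag_dx, screen_w - w))
    new_y = max(0, min(y + drag_dy, screen_h - h))
    return (new_x, new_y, w, h), moving_direction, moving_distance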
Optionally, the processor 1001 is further configured to: collect a first user image through the front camera, acquire the moving direction and the moving amplitude of the user's eyeballs in the first user image, determine the displacement of the user's eyeballs according to that moving direction and moving amplitude, calculate the distance from the user's eyeballs to the screen according to the ratio of the size of the first user image to the size of the user's face in the first user image, and calculate the moving direction and the moving distance of the screen recording area according to the displacement of the user's eyeballs and the distance from the user's eyeballs to the screen.
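One plausible reading of this eyeball-based calculation is sketched below; the proportional distance model, the calibration constant and the gain are assumptions introduced purely for illustration (a real implementation would need per-device calibration) and are not stated by the disclosure:

import math

def eye_to_screen_distance(image_height_px, face_height_px, k_calibration):
    """The smaller the user's face appears relative to the whole image, the farther
    the eyes are from the screen; k_calibration is an assumed device-specific
    constant obtained from a one-off measurement."""
    return k_calibration * (image_height_px / face_height_px)

def region_motion_from_eyes(eye_dx_px, eye_dy_px, distance_to_screen, gain):
    """Map the eyeball displacement observed in the camera image to a moving
    direction and moving distance of the screen recording area; the same eyeball
    displacement yields a larger on-screen move when the user is farther away."""
    move_dx = gain * eye_dx_px * distance_to_screen
    move_dy = gain * eye_dy_px * distance_to_screen
    moving_distance = math.hypot(move_dx, move_dy)
    moving_direction = math.degrees(math.atan2(move_dy, move_dx))
    return moving_direction, moving_distance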
Optionally, the processor 1001 is further configured to: collect a second user image through the front camera, acquire the position coordinates of the user's eyeballs in the second user image, acquire the position coordinates of the user's eyeballs in the first user image, and obtain the moving direction and the moving amplitude of the user's eyeballs in the first user image from the position coordinates of the user's eyeballs in the first user image and in the second user image.
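A minimal sketch of this two-image step, assuming the second user image is the earlier reference frame and eyeball positions are given as pixel coordinates (these representational choices are assumptions, not features stated by the disclosure):

import math

def eye_movement_between_frames(eye_xy_in_second_image, eye_xy_in_first_image):
    """Moving direction (degrees) and moving amplitude (pixels) of the user's
    eyeballs, from the reference (second) image to the current (first) image."""
    dx = eye_xy_in_first_image[0] - eye_xy_in_second_image[0]
    dy = eye_xy_in_first_image[1] - eye_xy_in_second_image[1]
    moving_amplitude = math.hypot(dx, dy)
    moving_direction = math.degrees(math.atan2(dy, dx))
    return moving_direction, moving_amplitude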
Optionally, the processor 1001 is further configured to: recognize at least one figure with preset characteristics in the image displayed on the screen as a recognition object, mark the recognition object in the displayed image, and, when a selection operation by the user on a marked recognition object is detected, determine the recognition object selected by that operation as the screen recording object.
Optionally, the processor 1001 is further configured to: recognize at least one figure with preset characteristics in the image displayed on the screen as a recognition object, judge whether a recognition object matching the screen recording object exists, and, when such a recognition object exists, update the position of the screen recording area according to the position coordinates of that recognition object.
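The object-tracking alternative can be pictured with the sketch below; the detector output format, the matching by an object identifier, and the (x, y, w, h) region representation are illustrative assumptions only and are not defined by this disclosure:

def update_region_by_object(region, detections, recording_object_id, screen_w, screen_h):
    """detections: iterable of (object_id, center_x, center_y) produced by a
    hypothetical detector for figures with the preset characteristics. If a
    detection matches the selected screen recording object, re-center the
    screen recording area on it; otherwise leave the area where it is."""
    x, y, w, h = region
    for object_id, cx, cy in detections:
        if object_id == recording_object_id:
            x = max(0, min(int(cx - w / 2), screen_w - w))
            y = max(0, min(int(cy - h / 2), screen_h - h))
            break
    return (x, y, w, h)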
The mobile terminal 1000 can implement the processes implemented by the mobile terminal in the foregoing embodiments, and details are not repeated here to avoid repetition.
The mobile terminal 1000 according to the embodiment of the present invention forms a screen recording area on the screen of the mobile terminal and records only the content displayed in the screen recording area during the screen recording process, thereby implementing screen recording of a local area and simplifying the screen recording operation on that area, which facilitates use by the user.
By adjusting the position of the screen recording area according to their own needs, the user records only the required images, which reduces unnecessary information, reduces the storage space occupied by the screen recording file, and improves the user experience.
Sixth embodiment
Fig. 11 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 1100 in fig. 11 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a vehicle-mounted computer, or the like.
The mobile terminal 1100 in fig. 11 includes a Radio Frequency (RF) circuit 1101, a memory 1102, an input unit 1103, a display unit 1104, a processor 1106, an audio circuit 1107, a WiFi (Wireless Fidelity) module 1108, a power supply 1109, and a photographing component 1110.
The photographing component 1110 includes a front camera and a rear camera of the mobile terminal.
The input unit 1103 may be used to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 1100. Specifically, in the embodiment of the present invention, the input unit 1103 may include a touch panel 11031. The touch panel 11031, also referred to as a touch screen, may collect touch operations of the user on or near it (for example, operations performed by the user on the touch panel 11031 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection devices according to a preset program. Optionally, the touch panel 11031 may include a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1106, and receives and executes the commands sent by the processor 1106. In addition, the touch panel 11031 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch panel 11031, the input unit 1103 may also include other input devices 11032, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 1104 may be used, among other things, to display information input by the user or provided to the user, as well as the various menu interfaces of the mobile terminal 1100. The display unit 1104 may include a display panel 11041; optionally, the display panel 11041 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
It should be noted that the touch panel 11031 may cover the display panel 11041 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 1106 to determine the type of touch event, and the processor 1106 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement of these two display areas is not limited; any arrangement that distinguishes the two areas, such as an up-and-down or left-right arrangement, may be used. The application program interface display area may be used to display the interface of an application, and each interface may contain at least one interface element, such as an icon and/or a widget desktop control of an application; the application program interface display area may also be an empty interface that does not contain any content. The common control display area is used to display frequently used controls, such as setting buttons, interface numbers, scroll bars, and application icons such as the phone book icon.
The processor 1106 is the control center of the mobile terminal 1100. It connects the various parts of the entire mobile phone through various interfaces and lines, and executes the various functions of the mobile terminal 1100 and processes data by running or executing the software programs and/or modules stored in the first memory 11021 and calling the data stored in the second memory 11022, thereby performing overall monitoring of the mobile terminal 1100. Optionally, the processor 1106 may include one or more processing units.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 11021 and/or the data stored in the second memory 11022, the processor 1106 is configured to display the screen recording area on the screen of the mobile terminal, and, when the position of the screen recording area on the screen is updated, capture images in the screen recording area according to that position and generate a screen recording file from the captured images.
Optionally, the processor 1106 is further configured to: when the input unit 1103 detects a dragging operation on the screen recording area, acquire the displacement amount of the dragging operation, calculate the moving direction and the moving distance of the screen recording area according to the displacement amount, and update the position of the screen recording area according to the moving direction and the moving distance of the screen recording area;
or acquire the moving direction and the moving amplitude of the user's eyeballs, calculate the moving direction and the moving distance of the screen recording area according to the moving direction and the moving amplitude of the user's eyeballs, and update the position of the screen recording area according to the moving direction and the moving distance of the screen recording area;
or acquire the position coordinates of the screen recording object corresponding to the screen recording area, and update the position of the screen recording area according to the position coordinates of the screen recording object.
Optionally, the processor 1106 is further configured to: collect a first user image through the front camera of the photographing component 1110, acquire the moving direction and the moving amplitude of the user's eyeballs in the first user image, determine the displacement of the user's eyeballs according to that moving direction and moving amplitude, calculate the distance from the user's eyeballs to the screen according to the ratio of the size of the first user image to the size of the user's face in the first user image, and calculate the moving direction and the moving distance of the screen recording area according to the displacement of the user's eyeballs and the distance from the user's eyeballs to the screen.
Optionally, the processor 1106 is further configured to: collect a second user image through the front camera of the photographing component 1110, acquire the position coordinates of the user's eyeballs in the second user image, acquire the position coordinates of the user's eyeballs in the first user image, and obtain the moving direction and the moving amplitude of the user's eyeballs in the first user image from the position coordinates of the user's eyeballs in the first user image and in the second user image.
Optionally, the processor 1106 is further configured to: recognize at least one figure with preset characteristics in the image displayed on the screen as a recognition object, mark the recognition object in the displayed image, and, when a selection operation by the user on a marked recognition object is detected, determine the recognition object selected by that operation as the screen recording object.
Optionally, the processor 1106 is further configured to: recognize at least one figure with preset characteristics in the image displayed on the screen as a recognition object, judge whether a recognition object matching the screen recording object exists, and, when such a recognition object exists, update the position of the screen recording area according to the position coordinates of that recognition object.
Therefore, the mobile terminal 1100 provided by the embodiment of the present invention forms a screen recording area on the screen of the mobile terminal and records only the content displayed in the screen recording area during the screen recording process, thereby implementing screen recording of a local area, simplifying the screen recording operation on that area, and facilitating use by the user.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A screen recording method, applied to a mobile terminal, characterized by comprising the following steps:
displaying a screen recording area on a screen of the mobile terminal, wherein the area of the screen recording area is smaller than that of the screen;
updating the position of the screen recording area on the screen according to the movement change of eyeballs of a user, and capturing an image in the screen recording area according to the position of the screen recording area on the screen;
generating a screen recording file according to the captured image;
the updating the position of the screen recording area on the screen comprises:
the method comprises the steps of obtaining the moving direction and the moving amplitude of eyeballs of a user, calculating the moving direction and the moving distance of a screen recording area according to the moving direction and the moving amplitude of the eyeballs of the user, and updating the position of the screen recording area according to the moving direction and the moving distance of the screen recording area.
2. The method according to claim 1, wherein the step of calculating the moving direction and the moving distance of the screen recording area according to the moving direction and the moving amplitude of the eyeballs of the user comprises:
acquiring a first user image through a front camera;
acquiring the moving direction and the moving amplitude of the user eyeballs in the first user image;
determining the displacement of the user eyeballs according to the moving direction and the moving amplitude of the user eyeballs in the first user image;
calculating the distance from the eyeballs of the user to the screen according to the ratio of the size of the first user image to the size of the face of the user in the first user image;
and calculating the moving direction and the moving distance of the screen recording area according to the displacement of the eyeballs of the user and the distance from the eyeballs of the user to the screen.
3. The method according to claim 2, wherein before the acquiring of the first user image through the front camera, the method further comprises:
acquiring a second user image through the front camera;
acquiring the position coordinates of the eyeballs of the user in the second user image;
the acquiring a moving direction and a moving amplitude of the user eyeball in the first user image includes:
acquiring the position coordinates of the eyeballs of the user in the first user image;
and acquiring the moving direction and the moving amplitude of the user eyeballs in the first user image according to the position coordinates of the user eyeballs in the first user image and the position coordinates of the user eyeballs in the second user image.
4. A mobile terminal, comprising:
the display module is used for displaying a screen recording area on a screen of the mobile terminal, and the area of the screen recording area is smaller than that of the screen;
the processing module is used for updating the position of the screen recording area displayed by the display module on the screen according to the movement change of eyeballs of a user and intercepting an image in the screen recording area according to the position of the screen recording area on the screen;
the generating module is used for generating a screen recording file according to the image intercepted by the processing module;
the processing module comprises:
and the second processing submodule is used for acquiring the moving direction and the moving amplitude of the eyeballs of the user, calculating the moving direction and the moving distance of the screen recording area according to the moving direction and the moving amplitude of the eyeballs of the user, and updating the position of the screen recording area according to the moving direction and the moving distance of the screen recording area.
5. The mobile terminal of claim 4, wherein the second processing sub-module comprises:
the acquisition unit is used for acquiring a first user image through the front camera;
the obtaining unit is used for obtaining the moving direction and the moving amplitude of the user eyeballs in the first user image;
the determining unit is used for determining the displacement of the user eyeballs according to the moving direction and the moving amplitude of the user eyeballs in the first user image;
the first calculation unit is used for calculating the distance from the eyeballs of the user to the screen according to the ratio of the size of the first user image to the size of the face of the user in the first user image;
and the second calculation unit is used for calculating the moving direction and the moving distance of the screen recording area according to the displacement of the user eyeballs determined by the determining unit and the distance from the user eyeballs to the screen calculated by the first calculation unit.
6. The mobile terminal of claim 5, wherein the mobile terminal further comprises:
the acquisition module is used for acquiring a second user image through the front camera;
the obtaining module is used for obtaining the position coordinates of the eyeballs of the user in the second user image;
the obtaining unit includes:
the first obtaining subunit is configured to obtain position coordinates of the user eyeballs in the first user image;
and the second acquiring subunit is configured to acquire a moving direction and a moving amplitude of the user eyeballs in the first user image according to the position coordinates of the user eyeballs in the first user image and the position coordinates of the user eyeballs in the second user image.
CN201610870935.4A 2016-09-30 2016-09-30 Screen recording method and mobile terminal Active CN106406710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610870935.4A CN106406710B (en) 2016-09-30 2016-09-30 Screen recording method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610870935.4A CN106406710B (en) 2016-09-30 2016-09-30 Screen recording method and mobile terminal

Publications (2)

Publication Number Publication Date
CN106406710A CN106406710A (en) 2017-02-15
CN106406710B true CN106406710B (en) 2021-08-27

Family

ID=59228548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610870935.4A Active CN106406710B (en) 2016-09-30 2016-09-30 Screen recording method and mobile terminal

Country Status (1)

Country Link
CN (1) CN106406710B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107797724A (en) * 2017-06-12 2018-03-13 平安科技(深圳)有限公司 Method, apparatus, computer equipment and computer-readable recording medium are shielded in record of attending a banquet
CN109286718A (en) * 2017-07-21 2019-01-29 珠海格力电器股份有限公司 A kind of record screen method, apparatus and electronic equipment
CN107480245B (en) * 2017-08-10 2018-12-28 腾讯科技(深圳)有限公司 A kind of generation method of video file, device and storage medium
CN108108294B (en) * 2017-12-29 2021-08-24 北京奇虎科技有限公司 Method and system for acquiring customized operation data according to reference time
CN108600513A (en) * 2018-03-28 2018-09-28 努比亚技术有限公司 A kind of record screen control method, equipment and computer readable storage medium
CN108920226B (en) * 2018-05-04 2022-04-29 维沃移动通信有限公司 Screen recording method and device
CN108924452A (en) * 2018-06-12 2018-11-30 西安艾润物联网技术服务有限责任公司 Part record screen method, apparatus and computer readable storage medium
CN109168076B (en) * 2018-11-02 2021-03-19 北京字节跳动网络技术有限公司 Online course recording method, device, server and medium
CN109348156B (en) * 2018-11-29 2020-07-17 广州视源电子科技股份有限公司 Courseware recording and playing method and device, intelligent interactive panel and storage medium
CN110046009B (en) * 2019-02-19 2022-08-23 创新先进技术有限公司 Recording method, recording device, server and readable storage medium
CN109862385B (en) * 2019-03-18 2022-03-01 广州虎牙信息科技有限公司 Live broadcast method and device, computer readable storage medium and terminal equipment
CN110119240A (en) * 2019-04-17 2019-08-13 维沃移动通信有限公司 A kind of record screen method and a kind of mobile terminal
WO2021163880A1 (en) * 2020-02-18 2021-08-26 深圳市欢太科技有限公司 Screen recording method and apparatus and computer-readable storage medium
CN111666024B (en) * 2020-05-28 2022-04-12 维沃移动通信(杭州)有限公司 Screen recording method and device and electronic equipment
CN113742183A (en) * 2020-05-29 2021-12-03 青岛海信移动通信技术股份有限公司 Screen recording method, terminal and storage medium
CN112153436B (en) * 2020-09-03 2022-10-18 Oppo广东移动通信有限公司 Screen recording method, device, equipment and storage medium
CN114189646B (en) * 2020-09-15 2023-03-21 深圳市万普拉斯科技有限公司 Terminal control method and device, electronic equipment and storage medium
CN112162669A (en) * 2020-10-10 2021-01-01 珠海格力电器股份有限公司 Screen recording method and device of intelligent terminal, storage medium and processor
CN114510186A (en) * 2020-10-28 2022-05-17 华为技术有限公司 Cross-device control method and device
CN112637624B (en) * 2020-12-14 2023-07-18 广州繁星互娱信息科技有限公司 Live stream processing method, device, equipment and storage medium
CN113099309A (en) * 2021-03-30 2021-07-09 上海哔哩哔哩科技有限公司 Video processing method and device
CN114189582A (en) * 2021-11-12 2022-03-15 惠州Tcl移动通信有限公司 Screen recording processing method, device, terminal and medium based on mobile terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2207342B1 (en) * 2009-01-07 2017-12-06 LG Electronics Inc. Mobile terminal and camera image control method thereof
CN105892642A (en) * 2015-12-31 2016-08-24 乐视移动智能信息技术(北京)有限公司 Method and device for controlling terminal according to eye movement

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271519A (en) * 2007-03-22 2008-09-24 阿特尼克斯有限公司 A method and apparatus for detecting faces
CN201413626Y (en) * 2009-06-09 2010-02-24 天津三星电子显示器有限公司 Display with kinescope recording function
CN101893934A (en) * 2010-06-25 2010-11-24 宇龙计算机通信科技(深圳)有限公司 Method and device for intelligently adjusting screen display
CN102122231A (en) * 2011-03-11 2011-07-13 华为终端有限公司 Screen display method and mobile terminal
CN104184904A (en) * 2014-09-10 2014-12-03 上海斐讯数据通信技术有限公司 Mobile phone screen recording method allowing user to define recording region
CN104967802A (en) * 2015-04-29 2015-10-07 努比亚技术有限公司 Mobile terminal, recording method of screen multiple areas and recording device of screen multiple areas

Also Published As

Publication number Publication date
CN106406710A (en) 2017-02-15

Similar Documents

Publication Publication Date Title
CN106406710B (en) Screen recording method and mobile terminal
EP3661187B1 (en) Photography method and mobile terminal
CN107528938B (en) Video call method, terminal and computer readable storage medium
US11061480B2 (en) Apparatus, method and recording medium for controlling user interface using input image
EP3232299A2 (en) Physical key component, terminal, and touch response method and device
WO2019033957A1 (en) Interaction position determination method and system, storage medium and smart terminal
US9965039B2 (en) Device and method for displaying user interface of virtual input device based on motion recognition
CN107613203B (en) Image processing method and mobile terminal
CN107172347B (en) Photographing method and terminal
WO2019001152A1 (en) Photographing method and mobile terminal
US20150149956A1 (en) Method for gesture-based operation control
US20200218356A1 (en) Systems and methods for providing dynamic haptic playback for an augmented or virtual reality environments
CN106250021B (en) Photographing control method and mobile terminal
KR20140125078A (en) Electronic device and method for unlocking in the electronic device
US20190318169A1 (en) Method for Generating Video Thumbnail on Electronic Device, and Electronic Device
CN106791437B (en) Panoramic image shooting method and mobile terminal
CN107592458B (en) Shooting method and mobile terminal
EP3575917B1 (en) Collecting fingerprints
CN107360375B (en) Shooting method and mobile terminal
EP2939411B1 (en) Image capture
EP3511865A1 (en) Imaging processing method for smart mirror, and smart mirror
US20200257396A1 (en) Electronic device and control method therefor
CN111176601B (en) Processing method and device
CN114296587A (en) Cursor control method and device, electronic equipment and storage medium
CN106990843B (en) Parameter calibration method of eye tracking system and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant