CN109558008B - Control method, control device, computer equipment and storage medium


Info

Publication number
CN109558008B
CN109558008B (application CN201811458612.XA)
Authority
CN
China
Prior art keywords
distance
terminal screen
current
user
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811458612.XA
Other languages
Chinese (zh)
Other versions
CN109558008A (en)
Inventor
凌其能
王敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201811458612.XA
Publication of CN109558008A
Application granted
Publication of CN109558008B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation

Abstract

The invention relates to a control method, a control device, a computer device and a storage medium. The method comprises: when a terminal screen of a terminal is in an information display state, detecting the distance between the face of a user and the terminal screen by using a tracking ranging system, to obtain the current distance between the user and the terminal screen; when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, prompting the user to adjust the distance from the terminal screen; and after the user has been prompted to adjust the distance, when the current distance between the user and the terminal screen is monitored to be greater than the first preset distance, stopping the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system, and, once a preset duration has elapsed and the terminal screen of the terminal is still in the information display state, returning to the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system. The method saves terminal resources.

Description

Control method, control device, computer equipment and storage medium
Technical Field
The present invention relates to the field of terminals, and in particular, to a control method, apparatus, computer device, and storage medium.
Background
With the development of science and technology, terminals support more and more applications and increasingly powerful functions, and people use terminals frequently in daily life and production activities, for example to read e-books and watch videos on a mobile phone. At present, in order to meet user requirements, applications on a terminal usually enable multiple functions at the same time. However, as the time a user spends on the terminal grows and the number of applications running on the terminal increases, the monitoring tasks running on the terminal consume a large amount of terminal resources.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a control method, an apparatus, a computer device and a storage medium that address the problem that the growing number of monitoring tasks running on a terminal consumes a large amount of terminal resources.
A control method, the method comprising: when a terminal screen of a terminal is in an information display state, detecting the distance between the face of a user and the terminal screen by using a tracking ranging system, to obtain the current distance between the user and the terminal screen; when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, prompting the user to adjust the distance from the terminal screen; and after the user has been prompted to adjust the distance, when the current distance between the user and the terminal screen is monitored to be greater than the first preset distance, stopping the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system, and returning to the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system once a preset duration has been reached.
A control device, the device comprising: a distance detection module, configured to detect, when a terminal screen of a terminal is in an information display state, the distance between the face of a user and the terminal screen by using a tracking ranging system, to obtain the current distance between the user and the terminal screen; a distance adjustment prompting module, configured to prompt the user to adjust the distance from the terminal screen when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance; and a detection stopping and returning module, configured to, after the user has been prompted to adjust the distance, stop the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system when the current distance between the user and the terminal screen is monitored to be greater than the first preset distance, and return to the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system once a preset duration has been reached.
In one embodiment, the apparatus further comprises: and the calling module is used for continuously calling the tracking ranging system to detect the distance between the face of the user and the terminal screen when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance.
In one embodiment, prompting the user to adjust the distance from the terminal screen includes processing the current information displayed on the terminal screen, and the processing includes at least one of the following manners: blinking the current information displayed on the terminal screen, segmenting the current information, blurring the current information, deforming the current information, and adding an interference element to the current information.
In one embodiment, the current information displayed in the terminal screen includes an image, the distance adjustment prompting module is configured to perform a blurring process on the current information displayed in the terminal screen, and the distance adjustment prompting module includes: a reference pixel point obtaining unit, configured to obtain a current image displayed in the terminal screen, obtain a current pixel point in the current image, and obtain a plurality of reference pixel points located within a preset range according to a position of the current pixel point; and the fuzzy pixel value calculating unit is used for calculating to obtain a fuzzy pixel value according to the pixel values of the plurality of reference pixel points, and taking the fuzzy pixel value as the pixel value of the current pixel point in the fuzzy image obtained by fuzzy processing.
In one embodiment, the apparatus further comprises: and the recovery module is used for recovering the current information displayed in the terminal screen to be clearly displayed when the current distance between the user and the terminal screen is monitored to be larger than a first preset distance after the current information displayed in the terminal screen is processed.
In one embodiment, the reference pixel point obtaining unit is configured to: acquire a target reference pixel distance corresponding to the current pixel point according to the current distance between the user and the terminal screen, the target reference pixel distance being in a negative correlation with the current distance between the user and the terminal screen; and take a pixel point whose distance from the current pixel point is smaller than the target reference pixel distance as a reference pixel point.
In one embodiment, the distance adjustment prompting module is configured to perform fuzzy processing on the current information displayed in the terminal screen, and the fuzzy processing on the current information displayed in the terminal screen includes: determining a corresponding target fuzzy processing parameter according to the current distance between the user and the terminal screen, and carrying out fuzzy processing on current information displayed in the terminal screen according to the target fuzzy processing parameter, wherein the fuzzy degree corresponding to the target fuzzy processing parameter is in a negative correlation relation with the current distance between the user and the terminal screen.
In one embodiment, the distance detection module comprises: an attribute information acquisition unit, configured to acquire, when the terminal screen of the terminal is in the information display state, target attribute information corresponding to the current information displayed on the terminal screen, the target attribute information including attribute information obtained by classifying the current information according to the age of its intended viewing object; a quick start mode entering unit, configured to enter an eye protection mode quick start mode when the target attribute information matches a preset type, wherein in the eye protection mode quick start mode the terminal automatically turns on the eye protection mode or automatically displays prompt information for turning on the eye protection mode; and a distance detection unit, configured to detect, when the eye protection mode is turned on, the distance between the face of the user and the terminal screen by using the tracking ranging system, to obtain the current distance between the user and the terminal screen.
In one embodiment, the distance detection module is configured to: open a session control object, the session control object being configured with configuration information corresponding to face tracking; control, through the session control object, a camera to capture images to obtain image frames; and perform face detection on the image frames, set a virtual anchor point on a face when the face appears in an image frame, perform face tracking according to the virtual anchor point, and determine the current distance between the user and the terminal screen according to the current position of the tracked face and the position of the camera.
In one embodiment, the apparatus further comprises: a playback stopping and resuming control module, configured to automatically stop playing the current information being played on the terminal screen when the current distance between the user and the terminal screen is smaller than or equal to a second preset distance, and to resume playing the current information from the position where playback stopped once the current distance between the user and the terminal screen is monitored to be greater than the second preset distance.
In one embodiment, the distance adjustment prompting module is configured to: and when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, calling a voice reminding interface to play voice reminding information, wherein the voice reminding information comprises eye protection reminding information.
A computer device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the above-mentioned control method.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, causes the processor to carry out the steps of the above-mentioned control method.
According to the control method, the control device, the computer device and the storage medium, when the current distance between the face and the terminal screen is smaller than or equal to the first preset distance, the user is prompted to adjust the distance from the terminal screen, which reminds the user to keep a proper distance from the terminal screen and protects eyesight. When the current distance between the user and the terminal screen is detected to be greater than the first preset distance, the tracking ranging system stops detecting the distance between the face of the user and the terminal screen, and only detects it again once the preset duration has been reached. This reduces the monitoring tasks and saves terminal resources, while keeping the timing of the prompts coordinated with changes in the current distance between the user and the terminal.
Drawings
FIG. 1 is a diagram of an application environment of a control method provided in one embodiment;
FIG. 2 is a flow chart of a control method in one embodiment;
FIG. 3A is a diagram illustrating calculation of the distance between the terminal screen and a face in one embodiment;
FIG. 3B is a diagram illustrating a prompt for prompting a user to increase a distance from a terminal screen when playing a video according to an embodiment;
FIG. 4 is a schematic diagram illustrating the remaining battery power after a video has been played while the distance between the face of the user and the terminal screen is detected under different conditions, in one embodiment;
FIG. 5A is a flowchart illustrating, in one embodiment, detecting the distance between the face of the user and the terminal screen by using a tracking ranging system when the terminal screen of the terminal is in an information display state, to obtain the current distance between the user and the terminal screen;
FIG. 5B is a diagram illustrating an example of displaying an eye-protection mode prompt dialog box on a video playback interface;
FIG. 5C is a diagram illustrating a display of a settings interface on a video playback interface, in accordance with an embodiment;
FIG. 5D is a diagram illustrating a prompt box for reminding a user to turn on camera permissions on a video playback interface in one embodiment;
FIG. 5E is a diagram illustrating, in one embodiment, displaying on a video playback interface a prompt box that directs the user to the camera permission setting page to allow the video application to use the camera;
FIG. 6 is a flowchart illustrating an embodiment of detecting a distance between a face of a user and a terminal screen by using a tracking ranging system to obtain a current distance between the user and the terminal screen;
FIG. 7A is a flow chart illustrating a control method according to an embodiment;
FIG. 7B is an architecture diagram of a service implementation module, according to one embodiment;
FIG. 8 is a flow chart of a control method in one embodiment;
FIG. 9 is a block diagram showing the structure of a control device in one embodiment;
FIG. 10 is a block diagram showing an internal configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, the first preset distance may be referred to as a second preset distance, and similarly, the second preset distance may be referred to as a first preset distance, without departing from the scope of the present application.
Fig. 1 is a diagram of an application environment of a control method provided in an embodiment. As shown in fig. 1, the application environment includes a terminal 110 and a server 120. An information display application in the terminal 110, such as a video playing application, receives a video playing request and acquires the video corresponding to the request from the server 120 for playing. While the terminal 110 plays the video, it can receive an eye protection mode start operation request input by the user, and the terminal 110 starts the eye protection mode according to the request. The terminal 110 then detects the distance between the face of the user and the terminal screen by using the tracking ranging system in the terminal 110 to obtain the current distance between the user and the terminal screen, and prompts the user to adjust the distance from the terminal screen when the current distance between the user and the terminal screen is less than or equal to a first preset distance. After the user has been prompted to adjust the distance, when the current distance between the user and the terminal screen is monitored to be greater than the first preset distance, the terminal 110 stops executing the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system, and when the preset duration is reached, the distance between the face of the user and the terminal screen is detected by using the tracking ranging system in the terminal 110 again.
It will be appreciated that the above application environment is only one example, and in some embodiments, the video may be stored locally in the terminal, or the displayed information may be any information that needs to be viewed by the eye, for example, the information may be one or more of pictures and text.
The server 120 may be an independent physical server, or a server cluster formed by a plurality of physical servers, and may be a cloud server providing basic cloud computing services such as cloud computing, cloud databases, cloud storage and CDN. The terminal 110 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The information display application is an application for displaying information, such as a picture browsing application, a video playing application, or a reading application. For example, the picture browsing application may be a photo gallery application, and the video playing application may be an application for playing videos such as Tencent Video or iQIYI. The reading application may be an application for reading news, novels, magazines and the like. The terminal 110 and the server 120 may be connected through a communication connection manner such as a network, and the present invention is not limited thereto.
As shown in fig. 2, in an embodiment, a control method is proposed, and this embodiment is mainly illustrated by applying the method to the terminal 110 in fig. 1. The method specifically comprises the following steps:
step S202, when the terminal screen of the terminal is in the information display state, the tracking ranging system is used for detecting the distance between the face of the user and the terminal screen, and the current distance between the user and the terminal screen is obtained.
Specifically, the information display state means that information is being displayed on the terminal screen, and the type of the information may be one or more of text, pictures and video, which is not specifically limited. For example, the currently displayed information may include a video consisting of video frame images and corresponding text on them. The tracking ranging system is used to track a target object and measure distance according to the position of the target object in space. The tracking ranging system may be provided in the terminal, or may be a tracking ranging system independent of the terminal. For example, the tracking ranging system may be a VIO (Visual-Inertial Odometry) system. Real time means that, once tracking has started, the tracking ranging system follows the movement of the target object and dynamically updates the detected distance. The current distance refers to the distance between the face of the user and the terminal screen at the current moment, and it changes continuously as the face moves. The terminal takes the face of the user as the object to be tracked and tracks the face of the user to obtain the current distance between the user and the terminal screen.
In one embodiment, a system for implementing an AR (Augmented Reality) function in the terminal may be adopted to detect the distance between the face of the user and the terminal screen. When the AR system detects the distance between the face of the user and the terminal screen, the configuration corresponding to face tracking is enabled, while the configuration for establishing the correspondence between the real world where the device is located and the virtual 3D coordinate space and for modeling virtual content is disabled, so that face tracking can be achieved without starting the augmented reality function itself. For example, the distance between the face of the user and the terminal screen may be detected using the ARKit framework, where the interface corresponding to FaceTrackingConfiguration is opened but the interface corresponding to WorldTrackingConfiguration is not. ARKit is an augmented reality development framework developed by Apple; it obtains the position of a face by calling a depth camera of the terminal, such as an infrared dot-matrix camera, and obtains the distance between the face and the terminal screen according to the position of the face and the position of the camera. FaceTrackingConfiguration is a configuration that configures the terminal to track a face by using the camera. WorldTrackingConfiguration is a configuration that configures the terminal to use the rear camera, track the orientation and position of the device, and detect real-world planes.
In one embodiment, as shown in fig. 3A, when calculating the distance, in the world coordinate system, assuming that the camera is located at the origin of the coordinates, i.e., (0, 0, 0), and the position where the face is recognized is assumed to be (a, b, c), and the distance between the camera and the face is taken as the distance between the face of the user and the terminal screen, the distance d between the face of the user and the terminal screen can be obtained by using formula (1).
d = √(a² + b² + c²)    (1)
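For illustration only, a minimal Swift sketch of formula (1), assuming the recognized face position (a, b, c) is available as a SIMD3<Float> in the same world coordinate system in which the camera sits at the origin (the function name is hypothetical, not from the patent):

```swift
import simd

// Hypothetical helper: Euclidean distance from the camera (assumed at the world
// origin) to the recognized face position (a, b, c), as in formula (1).
func faceToScreenDistance(facePosition: SIMD3<Float>) -> Float {
    // d = sqrt(a^2 + b^2 + c^2)
    return simd_length(facePosition)
}
```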
And step S204, when the current distance between the user and the terminal screen is less than or equal to a first preset distance, prompting the user to adjust the distance between the user and the terminal screen.
In particular, the first preset distance may be configured as desired, for example 40 centimeters. The current information refers to the information currently displayed on the terminal screen. The manner of prompting the user to adjust the distance from the terminal screen may be set as required, and may include one or more of the following manners, for example: 1. reducing the definition of the information currently displayed on the terminal screen; 2. blurring the information currently displayed on the terminal screen; 3. reducing the brightness of the display screen of the terminal; 4. giving a warning sound or displaying a striking icon, such as a pop-up prompt box, a yellow exclamation mark, an explosion pattern or another figure; 5. flashing the backlight of the display screen; 6. changing the color of the terminal display screen or of the information display interface to a specific color, such as yellow.
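As an illustration of one of the prompting manners listed above (reducing brightness and showing a prompt box), a minimal Swift sketch follows; the message text, brightness value and function name are assumptions, not taken from the patent:

```swift
import UIKit

// Hypothetical prompt: dim the display and show a simple alert box.
func promptUserToMoveBack(from viewController: UIViewController) {
    UIScreen.main.brightness = 0.3   // reduce the brightness of the display screen
    let alert = UIAlertController(title: nil,
                                  message: "You are too close to the screen.",
                                  preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "OK", style: .default))
    viewController.present(alert, animated: true)
}
```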
In one embodiment, when blurring is performed, all of the current information may be blurred, or only part of it, for example blurring a picture while leaving the text unblurred. The blurring makes the degree of blur of the current information higher than before the blurring, so that the user cannot easily see the blurred current information. The manner of blurring can be set as needed; for example, some content can be added to the current information, such as superimposing a semi-transparent or opaque picture on it. For current information of the image type, the pixel values of the image can be processed to reduce the details of the image and obtain a blurred picture.
In an embodiment, for an image, a reference pixel point corresponding to each pixel point needing blurring processing in the image can be obtained, a blurring pixel value corresponding to the pixel point needing blurring processing is obtained according to the reference pixel point, and the blurring pixel value is used as a pixel value of the pixel point in the blurring image obtained through blurring processing. The reference pixels may include pixels having a distance to the pixel to be blurred smaller than a preset pixel distance. For example, for the pixel point a, a pixel point adjacent to the pixel point a may be used as a reference pixel point, and an average value of the reference pixel points is calculated as a blurred pixel value of the pixel point a. In this way, the details of the image can be lost, thereby achieving the blurring effect.
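For illustration, a minimal Swift sketch of this neighbor-averaging blur, assuming a grayscale image represented as a 2D array of pixel values and a configurable neighborhood radius (both representation and function name are assumptions):

```swift
// Each pixel is replaced by the average of the reference pixels within `radius`,
// losing image detail and producing the blurring effect described above.
func boxBlur(_ pixels: [[Double]], radius: Int) -> [[Double]] {
    let height = pixels.count
    let width = pixels.first?.count ?? 0
    var blurred = pixels
    for y in 0..<height {
        for x in 0..<width {
            var sum = 0.0
            var count = 0
            for dy in -radius...radius {
                for dx in -radius...radius {
                    let ny = y + dy, nx = x + dx
                    if ny >= 0 && ny < height && nx >= 0 && nx < width {
                        sum += pixels[ny][nx]          // reference pixel value
                        count += 1
                    }
                }
            }
            blurred[y][x] = sum / Double(count)        // blurred pixel value
        }
    }
    return blurred
}
```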
Step S206, after the user is prompted to adjust the distance between the user and the terminal screen, when the current distance between the user and the terminal screen is monitored to be larger than a first preset distance, the step of detecting the distance between the face of the user and the terminal screen by using the tracking distance measuring system is stopped, and the step of detecting the distance between the face of the user and the terminal screen by using the tracking distance measuring system is returned until the preset time length is reached.
Specifically, the preset time period may be set as needed, and may be 4 seconds, for example. And if the current distance between the user and the terminal screen is greater than a first preset distance, stopping executing the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system, timing by using a timer, and detecting the distance between the face of the user and the terminal screen by using the tracking ranging system in the terminal again if the terminal screen of the terminal is still in an information display state when the preset time is reached.
In one embodiment, the preset duration may be greater than 2 seconds but less than 60 seconds. Within this range, when the distance between the face and the terminal screen becomes smaller than or equal to the first preset distance, the user can still be prompted in time to adjust the distance, so that the user promptly notices that the distance from the terminal screen is too small, while the tracking ranging system is not started too frequently, which further saves terminal resources.
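For illustration, a minimal Swift sketch of the control flow of step S206, written against a hypothetical DistanceMonitor protocol standing in for the tracking ranging system; the threshold and duration values are examples only:

```swift
import Foundation

// Hypothetical stand-in for the tracking ranging system.
protocol DistanceMonitor: AnyObject {
    var isTracking: Bool { get }
    var screenIsDisplayingInformation: Bool { get }
    func startTracking()
    func stopTracking()
    func promptUserToMoveBack()
}

final class EyeProtectionController {
    private let firstPresetDistance: Float = 0.40   // metres, example value
    private let presetDuration: TimeInterval = 3    // preset duration, e.g. 3 s
    private var restartTimer: Timer?

    func handle(currentDistance: Float, monitor: DistanceMonitor) {
        if currentDistance <= firstPresetDistance {
            monitor.promptUserToMoveBack()          // step S204: prompt the user
        } else if monitor.isTracking {
            monitor.stopTracking()                  // stop detecting (step S206)
            restartTimer = Timer.scheduledTimer(withTimeInterval: presetDuration,
                                                repeats: false) { _ in
                if monitor.screenIsDisplayingInformation {
                    monitor.startTracking()         // return to the detecting step
                }
            }
        }
    }
}
```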
As shown in fig. 3B, a video of "xx children 01" is being played in a video application of the terminal. If it is detected that the current distance between the user and the terminal screen is less than or equal to the first preset distance, the text prompt "if you get too close to me, I become blurry" can be displayed above the current video to prompt the user to increase the distance from the terminal screen. Of course, other ways can also be used to remind the user to increase the distance from the terminal screen, for example a voice reminder.
In one embodiment, as shown in fig. 4, three conditions are compared under otherwise identical test conditions: continuously using the tracking ranging system, starting the tracking ranging system at different preset durations, and not using the tracking ranging system (i.e., without the eye protection mode). After a preset set of 45-minute videos is played in each condition, the corresponding remaining battery power is as shown. As can be seen from fig. 4, if the tracking ranging system in the terminal is used continuously to detect the distance between the face of the user and the terminal screen, the remaining power is only 77%, and the power consumption is too large. When the preset duration is 1 s, 2 s, 3 s and 4 s, the remaining power is 81%, 85%, 88% and 89% respectively, so the preset duration may be 3 s, which does not consume too much power and still allows timely detection. The same test environment means: mobile phones of the same model and configuration, used for the first time with an initial power of 100%; identical phone parameter settings, such as the sound and brightness of video playback; and no additional functions enabled other than playing the video.
According to the control method, when the current distance between the face and the terminal screen is smaller than or equal to the first preset distance, the user is prompted to adjust the distance from the terminal screen, which reminds the user to keep a proper distance from the terminal screen and protects eyesight. When the current distance between the user and the terminal screen is detected to be greater than the first preset distance, the tracking ranging system stops detecting the distance between the face of the user and the terminal screen, and only detects it again once the preset duration has been reached. This reduces the monitoring tasks and saves terminal resources, while keeping the timing of the prompts coordinated with changes in the current distance between the user and the terminal.
For example, if the ARKit framework is used to perform distance detection, the GPU (Graphics Processing Unit) of the terminal is called, and if the movement of the face is detected through the camera in real time, the GPU is called continuously, which consumes a great deal of power. If the ARKit framework is instead used to perform distance detection only once every preset duration, the number of times the terminal's GPU is called is greatly reduced, so the power of the terminal can be saved.
In one embodiment, when the current distance between the user and the terminal screen is less than or equal to a first preset distance, the tracking ranging system is continuously invoked to detect the distance between the face of the user and the terminal screen.
Specifically, while the current distance between the user and the terminal screen is detected to be less than or equal to the first preset distance, the tracking ranging system may be used continuously to detect the distance between the face of the user and the terminal screen in real time. In this case, the tracking system tracks the position of the face in real time and acquires image frames at a preset frequency, for example 30 frames per second, for distance detection. In this way, when the current distance between the user and the terminal screen is smaller than or equal to the first preset distance, changes in the current distance between the face of the user and the terminal screen can be detected in real time, and as soon as the current distance between the user and the terminal screen becomes greater than the first preset distance, detection of the face by the tracking ranging system can be stopped in time.
In one embodiment, prompting the user to adjust the distance from the terminal screen includes processing the current information displayed on the terminal screen in at least one of the following manners: blinking the current information displayed on the terminal screen, segmenting the current information, blurring the current information, deforming the current information, and adding an interference element to the current information.
Specifically, the blinking processing refers to making the information blink; the blinking frequency may be random or preset, for example once every 2 seconds. Segmenting refers to dividing the information into a plurality of parts. Deforming the current information displayed on the terminal screen refers to distorting the information, such as applying rigid or non-linear deformation to text, or deforming an image, for example deforming a person in the image. Adding an interference element to the current information displayed on the terminal screen refers to adding elements, such as lines or circles, on top of the information to interfere with the user's viewing of the information.
In one embodiment, after processing the current information displayed in the terminal screen, when it is monitored that the current distance between the user and the terminal screen is greater than a first preset distance, the current information displayed in the terminal screen resumes clear display.
Specifically, when it is monitored that the current distance between the user and the terminal screen is greater than the first preset distance, the current information displayed on the terminal screen is no longer processed, so that the current information is restored to clear display. Resuming clear display means that the current information is displayed normally again, that is, the current information is no longer processed, or the processing effect is removed, so that the current information returns to normal display. For example, the interference element added to the information is removed, the information is no longer deformed, or the blinking processing is stopped.
In one embodiment, if the processed current information displayed on the terminal screen does not change over time, for example the current information is text or the played information is paused, resuming clear display of the current information may mean removing the effect of the processing, for example removing a semi-transparent or opaque picture overlaid on the current information. If the processed current information displayed on the terminal screen continues to change over time, resuming clear display means that the information currently being displayed is no longer processed. For example, during a live broadcast, if it is detected that the current distance between the user and the terminal screen is smaller than or equal to the first preset distance, the live video is not paused; instead, the video images in the live broadcast are continuously blurred, so that the current video images being played are displayed in a blurred manner, and when it is detected that the current distance between the user and the terminal screen is greater than the first preset distance, the blurring is stopped, so that the current video images being played are displayed clearly. It can be understood that restoring clear display is relative to the blurred image: restoring clear display of the current picture means that its resolution is higher than that of the blurred image, but no particular resolution is required; the clarity depends on the resolution of the image itself, and the clearly displayed image is not required to exceed a preset resolution.
In one embodiment, the current information displayed in the terminal screen includes an image, such as an image in a video. For the information of the image type, prompting the user to adjust the distance from the terminal screen comprises blurring the current information displayed in the terminal screen, and the blurring the current information displayed in the terminal screen comprises: acquiring a current image displayed in a terminal screen, acquiring current pixel points in the current image, and acquiring a plurality of reference pixel points in a preset range according to the positions of the current pixel points; and calculating to obtain a fuzzy pixel value according to the pixel values of the plurality of reference pixels, and taking the fuzzy pixel value as the pixel value of the current pixel in the fuzzy image obtained by fuzzy processing.
Specifically, the current pixel point is the pixel point for which the blurred pixel value is being calculated. Since the current image includes a plurality of pixel points for which blurred pixel values need to be calculated, when the blurred pixel value corresponding to pixel point A is calculated, pixel point A is the current pixel point, and when the blurred pixel value corresponding to pixel point B is calculated, pixel point B is the current pixel point. The preset range may be set as desired. For example, the reference pixel points may be pixel points whose distance from the current pixel point is smaller than a preset pixel distance, or pixel points located above and to the left of the current pixel point. The preset range may be a single range or a plurality of ranges. For example, the reference pixel points may be pixel points located on the extension lines of the two diagonals through the current pixel point. The blurred pixel value refers to the pixel value of the current pixel point in the blurred image obtained by blurring the current image. The reference pixel points are the pixel points to be referred to when calculating the blurred pixel value of the current pixel point. The blurred pixel value is obtained statistically from the pixel values of the reference pixel points, and may be, for example, their average value, median or mode.
In one embodiment, the blurred pixel value is obtained by performing weighted summation according to the pixel value of the reference pixel point and the corresponding weight. The weight corresponding to the reference pixel point may be fixed or determined according to the distance between the reference pixel point and the current pixel point. The distance between the reference pixel point and the current pixel point and the weight can form a positive correlation relationship or a negative correlation relationship. If the positive correlation is formed, the larger the distance between the reference pixel point and the current pixel point is, the larger the weight is, for example, if the reference pixel point and the current pixel point are adjacent pixel points, the weight may be 0.02, and if the reference pixel point is a pixel point which is one pixel point away from the current pixel point, the weight may be 0.1. If the correlation relationship is negative, the larger the distance between the reference pixel point and the current pixel point is, the smaller the weight is, for example, if the reference pixel point and the current pixel point are adjacent pixel points, the weight may be 0.1, and if the reference pixel point is a pixel point which is one pixel point away from the current pixel point, the weight may be 0.02.
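For illustration, a minimal Swift sketch of this weighted variant, using the same 2D pixel-array representation as the earlier blur sketch and an illustrative weight that decreases as the distance to the current pixel grows (the negative-correlation case); the weight formula is an assumption:

```swift
// Weighted blurred value for one current pixel: nearer reference pixels weigh more.
func weightedBlurValue(centerX: Int, centerY: Int,
                       pixels: [[Double]], radius: Int) -> Double {
    var weightedSum = 0.0
    var totalWeight = 0.0
    for dy in -radius...radius {
        for dx in -radius...radius {
            let y = centerY + dy, x = centerX + dx
            guard y >= 0, y < pixels.count, x >= 0, x < pixels[y].count else { continue }
            let distance = Double(max(abs(dx), abs(dy)))
            let weight = 1.0 / (1.0 + distance)    // illustrative weights only
            weightedSum += pixels[y][x] * weight
            totalWeight += weight
        }
    }
    return totalWeight > 0 ? weightedSum / totalWeight : 0
}
```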
In one embodiment, obtaining a plurality of reference pixels located within a preset range according to the position of the current pixel includes: acquiring a target reference pixel distance corresponding to a current pixel point according to the current distance between a user and a terminal screen, wherein the target reference pixel distance and the current distance between the user and the terminal screen are in a negative correlation relationship; and taking the pixel point with the distance from the current pixel point smaller than the target reference pixel distance as a reference pixel point.
Specifically, a negative correlation means that two variables change in opposite directions: when one variable decreases, the other increases. A positive correlation means that two variables change in the same direction: when one variable decreases, the other also decreases. For example, for two variables a and b, if a becomes larger as b becomes larger, a and b are positively correlated; if a becomes larger as b becomes smaller, a and b are negatively correlated. That the target reference pixel distance is negatively correlated with the current distance between the user and the terminal screen means: the larger the current distance between the user and the terminal screen, the smaller the corresponding target reference pixel distance; the smaller the current distance, the larger the corresponding target reference pixel distance. When pixel points whose distance from the current pixel point is smaller than the target reference pixel distance are taken as reference pixel points, increasing the target reference pixel distance increases the number of reference pixel points, and decreasing it decreases their number. More reference pixel points cause the image to lose more detail, so the blurred image becomes more blurred. Therefore, when the current distance between the user and the terminal screen is smaller than or equal to the first preset distance, the degree of blur of the current information increases as the current distance decreases, and the user finds it harder and harder to see the currently displayed image, which encourages the user to move away from the terminal, increases the distance between the user and the terminal screen, and better protects the user's eyesight.
In one embodiment, prompting the user to adjust the distance to the terminal screen includes blurring current information displayed in the terminal screen, and blurring the current information displayed in the terminal screen includes: determining a corresponding target fuzzy processing parameter according to the current distance between the user and the terminal screen, and carrying out fuzzy processing on current information displayed in the terminal according to the target fuzzy processing parameter, wherein the fuzziness corresponding to the target fuzzy processing parameter is in a negative correlation relation with the current distance between the user and the terminal screen.
Specifically, the degree of blur refers to how blurred the current information is: the greater the degree of blur, the more blurred the information. The degree of blur can be adjusted by adjusting the target blurring parameter. If the current information is text, the resolution of the text can be adjusted. If the current information is a picture, the target blurring parameter may be the range of reference pixel points; the more reference pixel points there are, the more detail is lost, and thus the greater the degree of blur. Since the degree of blur corresponding to the blurring parameter is negatively correlated with the current distance between the user and the terminal screen, when the current distance between the user and the terminal screen is smaller than or equal to the first preset distance, the degree of blur of the current information increases as the current distance decreases, and the user finds it harder and harder to see the currently displayed picture, which encourages the user to move away from the terminal, increases the distance between the user and the terminal screen, and better protects the user's eyesight.
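For illustration, a minimal Swift sketch of this negative correlation between the blurring parameter and the current distance; the threshold, maximum radius and the linear mapping are assumptions, not values from the patent:

```swift
// The smaller the current distance, the larger the blur radius (degree of blur).
func blurRadius(forCurrentDistance distance: Float,
                firstPresetDistance: Float = 0.40,
                maxRadius: Int = 8) -> Int {
    guard distance > 0, distance <= firstPresetDistance else { return 0 }
    // closeness is 0 at the threshold and approaches 1 as the face gets very close.
    let closeness = 1.0 - distance / firstPresetDistance
    return max(1, Int((Float(maxRadius) * closeness).rounded()))
}
```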
In one embodiment, the control method further comprises: and when the current distance between the user and the terminal screen is smaller than or equal to a second preset distance, automatically stopping playing the current information played in the terminal screen, and continuing playing the current information from the position where the playing is stopped until the current distance between the user and the terminal screen is monitored to be larger than the second preset distance.
Specifically, the second preset distance may be set as needed, for example 30 centimeters. The second preset distance may be the same as or different from the first preset distance. For continuously played information, when the current distance between the user and the terminal screen is detected to be smaller than or equal to the second preset distance, playback of the current information is automatically stopped; once the current distance between the user and the terminal screen is detected to be greater than the second preset distance, the current information continues to play from the position where playback stopped, so that playback can be paused and resumed adaptively according to the distance between the user and the terminal screen. For example, if it is detected while a video is playing that the current distance between the user and the terminal screen is less than or equal to the second preset distance, the video is paused, and when the current distance between the user and the terminal screen is monitored to be greater than the second preset distance, the video continues to play.
In one embodiment, the second preset distance is less than the first preset distance. In this way, after the current information has been blurred, if the current distance between the face of the user and the terminal screen keeps decreasing, playback of the information is automatically stopped, which strengthens the prompt urging the user to increase the distance from the terminal screen and protects eyesight.
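For illustration, a minimal Swift sketch of pausing and resuming playback around the second preset distance, assuming playback through AVPlayer; the distance value and class name are assumptions:

```swift
import AVFoundation

final class PlaybackDistanceGuard {
    private let secondPresetDistance: Float = 0.30  // metres, example value
    private let player: AVPlayer
    private var pausedByGuard = false

    init(player: AVPlayer) { self.player = player }

    func handle(currentDistance: Float) {
        if currentDistance <= secondPresetDistance {
            if !pausedByGuard {
                player.pause()        // automatically stop playing
                pausedByGuard = true
            }
        } else if pausedByGuard {
            player.play()             // continue from where playback stopped
            pausedByGuard = false
        }
    }
}
```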
In one embodiment, when the current distance between the user and the terminal screen is less than or equal to a first preset distance, the voice reminding interface is called to play voice reminding information, and the voice reminding information comprises eye protection reminding information.
Specifically, the eye protection reminding information is used for prompting the user to protect eyesight. Of course, the voice reminding information can also comprise other information, such as information watching duration and the like. When the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, the user is further reminded to increase the distance between the face and the screen in a mode of playing voice reminding information. For example, the voice reminding message may be "too close to the screen, please move a little further away from the screen".
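For illustration, a minimal Swift sketch of playing a spoken eye protection reminder through the system speech synthesizer; the wording of the reminder is an example only:

```swift
import AVFoundation

let reminderSynthesizer = AVSpeechSynthesizer()

func playEyeProtectionReminder() {
    let utterance = AVSpeechUtterance(
        string: "You are too close to the screen, please move a little further away.")
    reminderSynthesizer.speak(utterance)   // voice reminding information
}
```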
In one embodiment, when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, permission verification prompt information can be further acquired, and when the permission verification of the user passes, the current information displayed in the terminal screen is clearly displayed. For example, when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, prompt information for acquiring the fingerprint is displayed, and when the received fingerprint is the preset fingerprint, the current information displayed in the terminal screen is restored to be clearly displayed.
In an embodiment, as shown in fig. 5A, the step S202 of detecting the distance between the face of the user and the terminal screen by using a tracking ranging system in the terminal when the terminal screen of the terminal is in the information display state, and obtaining the current distance between the user and the terminal screen may specifically include the following steps:
step S502, when the terminal screen of the terminal is in the information display state, acquiring the target attribute information corresponding to the current information displayed in the terminal screen, wherein the target attribute information comprises attribute information obtained by classifying according to the age of the watching object corresponding to the current information.
Specifically, the target attribute information corresponding to the current information is used to describe the attributes of the current information. The viewing object refers to the audience for which the information is intended. The information may be classified according to the age of the viewing object. For example, videos may be divided into children's videos and non-children's videos according to the age of the viewing object, a children's video being a video created for children, i.e., suitable for viewing objects whose age is below a preset age.
In one embodiment, the target attribute information may include, in addition to the attribute information obtained by classifying according to the age of the viewing object corresponding to the current information, other attribute information; for example, the target attribute information may include the theme of the current information, such as whether it is a video of the comedy genre or a crime-type video.
And step S504, when the target attribute information meets the preset type, entering an eye protection mode quick start mode, wherein in the eye protection mode quick start mode, the terminal automatically starts the eye protection mode or automatically displays eye protection mode start prompt information.
Specifically, the preset types may be set as needed, and there may be one or more preset types; a plurality means two or more. If there are multiple preset types, one or more of them may be satisfied. The eye protection mode start prompt information is used to ask the user whether to turn on the eye protection mode. If the user inputs the corresponding operation according to the eye protection mode start prompt information, the terminal turns on the eye protection mode. In the eye protection mode quick start mode, the terminal may also turn on the eye protection mode automatically. Automatically turning on the eye protection mode means that the instruction for turning on the eye protection mode is triggered automatically without user operation. However, if other operations are required during the turning on of the eye protection mode, those operations may be manual or automatic. For example, if the user is required to log in with an account after the instruction for turning on the eye protection mode is triggered automatically, the account and password manually entered by the user may be received. If, after the instruction for turning on the eye protection mode is triggered automatically, the user is required to tap the button corresponding to "allow the application to access the camera", that button can be displayed on the terminal screen, and the eye protection mode is turned on when the user's tap is received.
For example, when the target attribute information corresponding to the current information is of the children's video type and the preset type is children's video, the eye protection mode quick start mode is entered. As shown in fig. 5B, in this mode, if the terminal has never turned on the eye protection mode, an eye protection mode prompt box may be displayed to ask the user whether the eye protection mode needs to be turned on, and if the user taps "turn on", the eye protection mode is turned on. If the terminal has turned on the eye protection mode before, the instruction for turning on the eye protection mode can be triggered automatically. If the target attribute information corresponding to the current information is of a non-children's video type, the user can tap a settings control to bring up the settings interface shown in fig. 5C.
When an instruction for turning on the eye protection mode is received, if the application does not have permission to access the camera, the user can be reminded to grant the camera permission by a pop-up box, as shown in fig. 5D. If a click on the "not allowed" operation is received, another reminder box may be displayed, as shown in fig. 5E, to prompt the user to go to the camera permission setting page and allow the video application to use the camera.
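For illustration, a minimal Swift sketch of the camera-permission check described above; enableEyeProtectionMode() is a hypothetical function standing in for turning on the eye protection mode:

```swift
import AVFoundation

func enableEyeProtectionModeIfAuthorized() {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        enableEyeProtectionMode()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted { enableEyeProtectionMode() }
            // Otherwise the user may be directed to the camera permission setting page.
        }
    default:
        // Denied or restricted: remind the user to grant camera permission in Settings.
        break
    }
}

func enableEyeProtectionMode() { /* turn on the eye protection mode */ }
```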
Step S506, when the eye protection mode is started, the distance between the face of the user and the terminal screen is detected by using the tracking ranging system in the terminal, and the current distance between the user and the terminal screen is obtained.
Specifically, in the eye protection mode, the step of detecting the distance between the face of the user and the terminal screen by using a tracking ranging system in the terminal to obtain the current distance between the user and the terminal screen is performed, and the control method provided by each embodiment of the invention is executed. In the embodiment of the invention, the eye protection mode is started quickly under the condition of playing the preset type of information, so that the efficiency of starting the eye protection mode and the utilization rate of the eye protection mode can be improved when the information corresponding to the specific age bracket is played, and the vision of the watching object in the specific age bracket is better protected.
In an embodiment, as shown in fig. 6, detecting the distance between the face of the user and the terminal screen by using a tracking ranging system in the terminal to obtain the current distance between the user and the terminal screen may specifically include the following steps:
step S602, a session control object is opened, and the session control object is configured with configuration information corresponding to the face tracking.
Specifically, a session control (session) object is an object for tracking a session process, for storing information required for a specific session and controlling a corresponding flow. The session refers to a process of communication by an interactive system in the terminal. The session control object is configured with configuration information corresponding to face tracking, so that the face can be tracked.
Step S604, controlling, by the session control object, the camera to collect images to obtain image frames.
Step S606, performing face detection on the image frame, setting a virtual anchor point on the face when a face appears in the image frame, performing face tracking according to the virtual anchor point, and determining the current distance between the user and the terminal screen according to the current position of the tracked face and the position of the camera.
In particular, a virtual anchor point (Anchor) is used for determining a position. The camera is a depth camera, the image frames are collected in real time, and when a human face appears in the image frames, the terminal can create a virtual anchor point object and add it to the session. The virtual anchor point can therefore track the human face, and the distance between the position of the virtual anchor point and the position of the camera is calculated and used as the distance between the face of the user and the terminal screen. When the human face leaves the camera recognition area, the virtual anchor point can be removed, and when a human face is recognized by the camera again, face tracking continues.
In one embodiment, the session control (session) object may be an ARSession in ARKit, for example the ARSession held by the ARSCNView auxiliary view in ARKit, which is a shared object that manages the device camera and the motion processing needed for the augmented reality experience. When the ARSession is initialized, a configuration corresponding to face tracking, such as an ARFaceTrackingConfiguration, may be added. Therefore, when the terminal screen of the terminal is in the information display state, the ARSession is started, and because the configuration corresponding to face tracking has been added, starting the ARSession automatically triggers the camera to perform face recognition, that is, face tracking. When a face appears in the camera recognition area, the camera can capture the face and record and track the real-time changes of the face in the ARKit coordinate system.
ARSCNView inherits from SCNView in the SceneKit framework, and SCNView in turn inherits from UIView in the UIKit framework. UIKit provides the infrastructure for constructing and managing the user interface of an application, and SceneKit is a framework for creating 3D games and adding 3D content to applications. UIView defines a rectangular area on the screen and an interface for managing the contents of that area; at run time, one view object controls the rendering of the area. SCNView is a view that displays 3D (three-dimensional) model objects. The ARSCNView view has three characteristics: 1. it acquires images from the device camera and renders them into a 3D scene for display; 2. it converts between the three-dimensional coordinate system of ARKit and the three-dimensional coordinate system of SceneKit; 3. it displays objects virtualized in SceneKit in the real-world coordinate system and tracks them. Therefore, if the face tracking configuration (ARFaceTrackingConfiguration) is added to the ARSession and the world tracking configuration (ARWorldTrackingConfiguration) is not run, face distance detection can be performed without turning on the augmented reality rendering function.
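As a concrete illustration of the session and anchor flow described above, the following is a minimal Swift sketch of face-distance detection with ARKit, assuming a device with a TrueDepth camera. The class name FaceDistanceDetector and its structure are illustrative choices made here, not the patented implementation.

```swift
import ARKit
import simd

// Minimal sketch: run only the face-tracking configuration and read the distance
// between the tracked face anchor and the camera.
final class FaceDistanceDetector: NSObject, ARSessionDelegate {
    private let session = ARSession()

    /// Distance (in metres) between the tracked face and the camera, if a face is tracked.
    private(set) var currentDistance: Float?

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        // Only the face-tracking configuration is run; no world tracking is added,
        // so no augmented reality scene has to be rendered.
        session.run(ARFaceTrackingConfiguration())
    }

    func stop() {
        session.pause()
    }

    // ARKit adds an ARFaceAnchor (the "virtual anchor point" above) when a face
    // appears and keeps updating it while the face stays in view.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first,
              let camera = session.currentFrame?.camera else { return }
        let f = face.transform.columns.3
        let c = camera.transform.columns.3
        let facePosition = SIMD3<Float>(f.x, f.y, f.z)
        let cameraPosition = SIMD3<Float>(c.x, c.y, c.z)
        currentDistance = simd_distance(facePosition, cameraPosition)
    }
}
```

Because only the face tracking configuration is run, no SceneKit scene needs to be displayed, which matches the point above that face distance detection does not require turning on the augmented reality rendering function.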
In one embodiment, as shown in fig. 7A, the control method may further include the steps of:
step S702, a service implementation module corresponding to the control method is obtained, and the service implementation module comprises a service interface, a target sub-module set for implementing the control method and sub-module operation logic.
Specifically, the service implementation module is a module for implementing the control method. The service interface is an interface provided for the application that displays information to call, such as the playing module of a video playing application. The target sub-module set is the set of all sub-modules used for implementing the control method. The sub-module operation logic comprises the operation sequence among the sub-modules and the information flow direction.
Step S704, associating the service implementation module with the information display module of the application program in the terminal.
Specifically, the service implementation module is associated with the information display module of the application program in the terminal, so that when the information display module runs, the service interface can be called to trigger execution of the control method provided by the embodiment of the present invention.
Step S706, when the information display module runs, the information display module calls the service interface to trigger the step of running the target sub-module of the service implementation module according to the sub-module running logic.
Step S708, if the current information displayed in the terminal screen is blurred, the current information obtained by the blurring process is sent to the information display module through the service interface, and the information display module displays the current information obtained by the blurring process.
Specifically, the information display module operates to display information. The service interface can be automatically called when the information display module runs, or the service interface can be called when the information display module runs and the eye protection mode is started. When the information display module calls a service interface to acquire corresponding information to be displayed, the step of operating a target sub-module of the service implementation module according to the sub-module operation logic is triggered to implement the control method, so that when the current distance between a user and a terminal screen is smaller than or equal to a first preset distance, the sub-module in the service implementation module performs fuzzy processing on the current information displayed in the terminal screen to obtain the information after the fuzzy processing, and the information is transmitted to the information display module through the service interface to be displayed.
The architecture of the service implementation module may be as shown in fig. 7B, and the sub-modules include a bottom layer distance detection module, a data processing module, a service interface implementation module, a module corresponding to the external interface protocol, a data holding module, and a voice prompt module. The bottom layer distance detection module is used for detecting the distance between the face and the terminal screen, and may use the distance between the face and the camera as the detected distance between the face and the terminal screen. The data processing module converts the current distance data between the face and the terminal screen into state data that the upper layer application can understand, for example whether blurring processing needs to be performed or not. The service interface implementation module provides a series of interfaces for the external service module to call, where "external" refers to the application that displays information; these interfaces may include, for example, interfaces for starting face distance detection, stopping face distance detection, starting voice prompts, and stopping voice prompts. The module corresponding to the external interface protocol provides the list of methods exposed to external calls, which may include, for example, a method for entering the eye protection mode, methods for playing and pausing the voice prompt, a method for checking whether the eye protection mode switch is turned on, and a method for checking whether the current terminal supports the eye protection mode. The data holding module stores the locally persisted information of the whole service, which may include information such as the current state identifier of the eye protection mode switch and the time interval of the timer. The voice prompt module encapsulates a voice SDK, such as the Apollo voice SDK, and provides the voice playing implementation for the service interface; a voice SDK (Software Development Kit) may be any SDK that provides a voice function, and Apollo here is the name of a self-developed semantic SDK. When the information display module enters the eye protection mode by using the method list exposed to external calls, the interface for starting face distance detection is called, and the bottom layer distance detection module is triggered to detect the distance between the face of the user and the terminal screen at the timing interval. When the current distance between the user and the terminal screen is smaller than or equal to the first preset distance, the data processing module determines that blurring processing is needed, and the service interface implementation module can call the interface corresponding to blurring processing to blur the current information, send the blurred information to the information display module for display, and call the voice interface to give a voice reminder.
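To illustrate how such a service implementation module can stay decoupled from the information display module, the following is a hedged Swift sketch. The protocol name EyeProtectionService, its methods, and VideoPlayerScreen are assumptions introduced only for illustration and do not correspond to an actual published API.

```swift
import Foundation

// Illustrative sketch only: names are assumed, not the actual service module.
protocol EyeProtectionService {
    // Methods exposed to external callers (the information display module).
    func enterEyeProtectionMode()
    func isEyeProtectionSupported() -> Bool
    func isEyeProtectionSwitchOn() -> Bool

    // Service interfaces backed by the sub-modules described above.
    func startFaceDistanceDetection(interval: TimeInterval)
    func stopFaceDistanceDetection()
    func startVoicePrompt()
    func stopVoicePrompt()
}

// The information display module depends only on the protocol, so it does not
// need to know the running logic or configuration of the service implementation.
final class VideoPlayerScreen {
    private let service: EyeProtectionService
    init(service: EyeProtectionService) { self.service = service }

    func onPlaybackStarted() {
        guard service.isEyeProtectionSupported(), service.isEyeProtectionSwitchOn() else { return }
        service.enterEyeProtectionMode()
        service.startFaceDistanceDetection(interval: 30) // assumed timing interval, in seconds
    }
}
```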
In the embodiment of the invention, the service implementation module is acquired and associated with the information display module, and the service implementation module is called to implement the control method when the information display module runs, so that the decoupling of the service implementation module and the information display module can be realized, the information display application does not need to know the running logic and the relevant configuration of the service implementation module, and the control method can also be realized by using the service implementation module.
Referring to fig. 8, the following describes the control method provided by an embodiment of the present invention, taking as an example the case where a video playing application plays a video and distance detection is performed by using the ARKit framework.
1. When the terminal plays a video through the video application, if the current video being played is a children's video, the eye protection mode can be opened automatically; if it is not a children's video, the eye protection mode is opened when the user clicks the control corresponding to the eye protection mode on the "settings" interface.
2. In the eye protection mode, a timer counts the elapsed time; when the preset duration is reached, the ARSession is started to detect the distance between the face of the user and the terminal screen, and the current distance between the user and the terminal screen is obtained.
3. The terminal judges whether the current distance between the user and the terminal screen is within the safe distance, where the safe distance refers to the minimum distance from the face to the screen at which viewing the information does not trigger blurring of the information.
4. If the current distance between the user and the terminal screen is within the safe distance, detection of the distance between the face of the user and the terminal screen with the ARSession is stopped and the ARSession enters a sleep mode; the process returns to step 2, the timer starts counting again, and the ARSession is started to detect the distance between the face of the user and the terminal screen when the timer determines that the preset duration has been reached.
5. If the current distance between the user and the terminal screen is not within the safe distance, the current image of the video is blurred and the video is paused, the distance between the face of the user and the terminal screen continues to be detected with the ARSession, the degree of blurring is dynamically adjusted according to the detected distance, and the process returns to step 3; when the current distance between the user and the terminal screen is within the safe distance, the current video displayed in the terminal screen is restored to clear display, playing of the current video continues, and the process returns to step 2.
6. If the operation corresponding to turning off the eye protection mode is received and playing of the video is stopped, the eye protection mode is exited.
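The steps above can be summarized as a simple loop. The following Swift sketch is only an illustration under assumed values (a 0.30 m safe distance and a 30 s timer interval) and placeholder callbacks (measureDistance, blurAndPause, restoreAndResume) that stand in for the ARSession-based detection and the display modules; none of these names come from the original disclosure.

```swift
import Foundation

// Simplified sketch of the detection loop in steps 1-6 above.
final class EyeProtectionLoop {
    private let safeDistance: Float = 0.30          // metres; assumed first preset distance
    private let sleepInterval: TimeInterval = 30    // timer interval while at a safe distance
    private let followInterval: TimeInterval = 0.5  // faster polling while too close
    private var tooClose = false
    private var timer: Timer?

    var measureDistance: () -> Float? = { nil }
    var blurAndPause: () -> Void = {}
    var restoreAndResume: () -> Void = {}

    func start() { schedule(after: sleepInterval) } // step 2: wait, then detect
    func stop() { timer?.invalidate() }             // step 6: exit the eye protection mode

    private func schedule(after interval: TimeInterval) {
        timer?.invalidate()
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: false) { [weak self] _ in
            self?.checkOnce()
        }
    }

    private func checkOnce() {
        guard let distance = measureDistance() else {
            schedule(after: sleepInterval)
            return
        }
        if distance <= safeDistance {
            // Step 5: too close — blur the frame, pause playback, keep tracking.
            if !tooClose { blurAndPause(); tooClose = true }
            schedule(after: followInterval)
        } else {
            // Steps 3-4: back within the safe distance — restore, then sleep until the next tick.
            if tooClose { restoreAndResume(); tooClose = false }
            schedule(after: sleepInterval)
        }
    }
}
```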
As shown in fig. 9, in one embodiment, a control device is provided, which may be integrated in the terminal 110, and specifically may include a distance detection module 902, a distance adjustment prompting module 904, and a stop detection and return module 906.
A distance detection module 902, configured to detect, when a terminal screen of the terminal is in an information display state, a distance between a face of a user and the terminal screen by using a tracking ranging system, so as to obtain a current distance between the user and the terminal screen;
a distance adjustment prompting module 904, configured to prompt the user to adjust a distance to the terminal screen when a current distance between the user and the terminal screen is less than or equal to a first preset distance;
and a detection stopping and returning module 906, configured to, after the user is prompted to adjust the distance to the terminal screen and when it is monitored that the current distance between the user and the terminal screen is greater than a first preset distance, stop the step of detecting the distance between the face of the user and the terminal screen by using the tracking and ranging system, and return to perform the step of detecting the distance between the face of the user and the terminal screen by using the tracking and ranging system until a preset time length is reached.
In one embodiment, the control device further comprises: and the calling module is used for continuously calling the tracking ranging system to detect the distance between the face of the user and the terminal screen when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance.
In one embodiment, prompting the user to adjust the distance to the terminal screen includes processing the current information displayed in the terminal screen, and processing the current information displayed in the terminal screen includes at least one of: performing flicker processing on the current information displayed in the terminal screen, segmenting the current information displayed in the terminal screen, blurring the current information displayed in the terminal screen, deforming the current information displayed in the terminal screen, and adding interference elements to the current information displayed in the terminal screen.
In one embodiment, the apparatus further comprises: and the recovery module is used for recovering the current information displayed in the terminal screen to be clearly displayed when the current distance between the user and the terminal screen is monitored to be greater than a first preset distance after the current information displayed in the terminal screen is processed.
In one embodiment, the current information displayed in the terminal screen includes an image, the distance adjustment prompting module is configured to perform a blurring process on the current information displayed in the terminal screen, and the distance adjustment prompting module 904 includes:
and the reference pixel point acquisition unit is used for acquiring a current image displayed in the terminal screen, acquiring current pixel points in the current image, and acquiring a plurality of reference pixel points in a preset range according to the positions of the current pixel points.
And the fuzzy pixel value calculating unit is used for calculating to obtain a fuzzy pixel value according to the pixel values of the plurality of reference pixel points, and the fuzzy pixel value is used as the pixel value of the current pixel point in the fuzzy image obtained by fuzzy processing.
In one embodiment, the reference pixel point obtaining unit is configured to:
and acquiring a target reference pixel distance corresponding to the current pixel point according to the current distance between the user and the terminal screen, wherein the target reference pixel distance and the current distance between the user and the terminal screen are in a negative correlation relationship.
And taking the pixel point with the distance from the current pixel point smaller than the target reference pixel distance as a reference pixel point.
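As an illustration of the reference-pixel blurring just described, the following Swift sketch blurs a grayscale pixel grid with a reference radius that grows as the user gets closer. The specific mapping from the user-screen distance to the radius and the 1/(1+d) weighting are assumed examples for illustration, not the exact formulas of this disclosure.

```swift
// Grayscale image modelled as a row-major pixel grid, for brevity.
struct GrayImage {
    let width: Int, height: Int
    var pixels: [Float]
    func pixel(x: Int, y: Int) -> Float { pixels[y * width + x] }
}

/// The closer the user is, the larger the reference radius (negative correlation); assumed mapping.
func referenceRadius(forUserDistance d: Float) -> Int {
    max(1, Int((0.5 - min(d, 0.5)) * 20))
}

func blurred(_ image: GrayImage, userDistance: Float) -> GrayImage {
    let radius = referenceRadius(forUserDistance: userDistance)
    var out = image
    for y in 0..<image.height {
        for x in 0..<image.width {
            var weightedSum: Float = 0
            var totalWeight: Float = 0
            // Reference pixels: those whose distance from the current pixel is below the radius.
            for dy in -radius...radius {
                for dx in -radius...radius {
                    let nx = x + dx, ny = y + dy
                    guard nx >= 0, nx < image.width, ny >= 0, ny < image.height else { continue }
                    let dist = Float(dx * dx + dy * dy).squareRoot()
                    guard dist < Float(radius) else { continue }
                    // Weight falls off with distance from the current pixel.
                    let weight = 1 / (1 + dist)
                    weightedSum += weight * image.pixel(x: nx, y: ny)
                    totalWeight += weight
                }
            }
            out.pixels[y * image.width + x] = weightedSum / totalWeight
        }
    }
    return out
}
```

Because referenceRadius grows as the user-screen distance shrinks, the displayed image becomes more blurred the closer the user leans in, matching the negative correlation described above.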
In one embodiment, the current information displayed in the terminal screen includes text, and prompting the user to adjust the distance from the terminal screen includes at least one of: performing flicker processing on the current text displayed in the terminal screen, segmenting the current text displayed in the terminal screen, deforming the current text displayed in the terminal screen, and adding interference elements to the current text displayed in the terminal screen.
In one embodiment, the distance adjustment prompt module 904 is configured to: determining a corresponding target fuzzy processing parameter according to the current distance between the user and the terminal screen, and prompting the user to adjust the distance between the user and the terminal screen according to the target fuzzy processing parameter, wherein the fuzziness corresponding to the target fuzzy processing parameter is in a negative correlation relation with the current distance between the user and the terminal screen.
In one embodiment, the distance detection module 902 comprises:
the terminal comprises an attribute information acquisition unit, a display unit and a display unit, wherein the attribute information acquisition unit is used for acquiring target attribute information corresponding to current information displayed in a terminal screen when the terminal screen of the terminal is in an information display state, and the target attribute information comprises attribute information obtained by classifying according to the age of a viewing object corresponding to the current information;
the terminal comprises a quick opening mode entering unit, a quick opening prompting unit and a prompt unit, wherein the quick opening mode entering unit is used for entering an eye protection mode quick opening mode when target attribute information meets a preset type, and the terminal automatically opens the eye protection mode or automatically displays eye protection mode opening prompt information in the eye protection mode quick opening mode;
and the distance detection unit is used for detecting the distance between the face of the user and the terminal screen by using a tracking ranging system in the terminal when the eye protection mode is started to obtain the current distance between the user and the terminal screen.
In one embodiment, the distance detection module 902 is configured to:
opening a session control object, wherein the session control object is configured with configuration information corresponding to face tracking;
the session control object controls a camera to collect images to obtain image frames;
the method comprises the steps of detecting a face of an image frame, setting a virtual anchor point on the face when the face appears in the image frame, tracking the face according to the virtual anchor point, and determining the current distance between a user and a terminal screen according to the current position of the tracked face and the position of a camera.
In one embodiment, the control device further comprises:
and the stop and continuous playing control module is used for automatically stopping playing the current information played in the terminal screen when the current distance between the user and the terminal screen is smaller than or equal to a second preset distance, and starting to continuously play the current information from the position where the playing is stopped until the current distance between the user and the terminal screen is monitored to be larger than the second preset distance.
In one embodiment, the control device is further configured to:
acquiring a service implementation module corresponding to the control device, wherein the service implementation module comprises a service interface, a target submodule set for implementing the control device and submodule operation logic;
associating the service implementation module with an information display module of an application program in the terminal;
when the information display module runs, the information display module calls a service interface and triggers a target sub-module of the service implementation module to run according to the sub-module running logic;
and if the current information displayed in the terminal screen is subjected to the fuzzy processing, the current information obtained by the fuzzy processing is sent to the information display module through the service interface, and the current information obtained by the fuzzy processing is displayed by the information display module.
In one embodiment, the control device further comprises: and the reminding module is used for calling the voice reminding interface to play voice reminding information when the current distance between the user and the terminal screen is less than or equal to a first preset distance, and the voice reminding information comprises eye protection reminding information.
FIG. 10 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 in fig. 1. As shown in fig. 10, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the control method. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to perform the control method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or fewer components than those shown, or combine certain components, or have a different arrangement of components.
In one embodiment, the control apparatus provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 10. The memory of the computer device may store various program modules constituting the control apparatus, such as a distance detection module 902, a distance adjustment prompting module 904, and a stop detection and return module 906 shown in fig. 9. The computer program constituted by the respective program modules causes the processor to execute the steps in the control method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 10 may detect, by using the tracking and ranging system, a distance between a face of a user and a terminal screen of the terminal when the terminal screen of the terminal is in an information display state through the distance detection module 902 in the control apparatus shown in the figure, so as to obtain a current distance between the user and the terminal screen; when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, the distance adjustment prompting module 904 prompts the user to adjust the distance between the user and the terminal screen; after the detection and return stopping module 906 prompts the user to adjust the distance between the user and the terminal screen, when the current distance between the user and the terminal screen is monitored to be greater than a first preset distance, the detection of the distance between the face of the user and the terminal screen by using the tracking distance measuring system is stopped, and the step of detecting the distance between the face of the user and the terminal screen by using the tracking distance measuring system is returned when the terminal screen of the terminal is in the information display state until the preset time length is reached.
In an embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above-described control method. The steps of the control method here may be steps in the control methods of the respective embodiments described above.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when executed by a processor, causes the processor to carry out the steps of the above-mentioned control method. The steps of the control method here may be steps in the control methods of the respective embodiments described above.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise, there is no strict restriction on the order in which these steps are performed, and they may be performed in other orders. Moreover, at least a portion of the steps in the various embodiments may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different times, and the order of performance of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (24)

1. A method of controlling, the method comprising:
when a terminal screen of a terminal is in an information display state and an eye protection mode is started, detecting the distance between the face of a user and the terminal screen by using a tracking ranging system to obtain the current distance between the user and the terminal screen, wherein the eye protection mode is started according to target attribute information corresponding to current information displayed in the terminal screen, the target attribute information comprises attribute information obtained by classifying according to the age of a viewing object corresponding to the current information, and the viewing object is an object for viewing information;
when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, prompting the user to adjust the distance between the user and the terminal screen;
when the current distance between the user and the terminal screen is monitored to be larger than a first preset distance after the user is prompted to adjust the distance between the user and the terminal screen, stopping the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system, and returning to the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system until the preset time length is reached;
the current information displayed in the terminal screen includes an image, the prompting the user to adjust the distance to the terminal screen includes blurring the current image displayed in the terminal screen, and the blurring the current image displayed in the terminal screen includes:
acquiring a current image displayed in the terminal screen, acquiring a current pixel point in the current image, and acquiring a target reference pixel distance corresponding to the current pixel point according to the current distance between the user and the terminal screen, wherein the target reference pixel distance and the current distance between the user and the terminal screen are in a negative correlation relationship;
taking the pixel point with the distance from the current pixel point smaller than the target reference pixel distance as a reference pixel point;
and calculating to obtain a fuzzy pixel value according to the pixel value of the reference pixel point, and taking the fuzzy pixel value as the pixel value of the current pixel point in a fuzzy image obtained by fuzzy processing.
2. The method of claim 1, further comprising:
and when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, continuing to call the tracking distance measuring system to detect the distance between the face of the user and the terminal screen.
3. The method of claim 1, wherein prompting the user to adjust the distance from the terminal screen further comprises at least one of:
the method comprises the steps of carrying out flicker processing on current information displayed in the terminal screen, segmenting the current information displayed in the terminal screen, deforming the current information displayed in the terminal screen and adding interference elements on the current information displayed in the terminal screen.
4. The method according to claim 1, wherein the tracking ranging system is a system for implementing an augmented reality function, and when detecting a distance between a face of a user and the terminal screen, the configuration corresponding to face tracking in the tracking ranging system is turned on, and the configuration for establishing a correspondence between a real world and a virtual 3D coordinate space and modeling virtual content is turned off.
5. The method of claim 3, further comprising:
and after processing the current information displayed in the terminal screen, when monitoring that the current distance between the user and the terminal screen is greater than a first preset distance, the current information displayed in the terminal screen resumes clear display.
6. The method of claim 1, wherein the calculating a blurred pixel value from the pixel values of the reference pixels comprises:
and carrying out weighted summation according to the pixel value of the reference pixel point and the weight corresponding to the reference pixel point to obtain a fuzzy pixel value, wherein the weight corresponding to the reference pixel point is determined according to the distance between the reference pixel point and the current pixel point.
7. The method of claim 1, wherein the current information displayed in the terminal screen comprises text, wherein prompting the user to adjust the distance from the terminal screen comprises blurring the current text displayed in the terminal screen, wherein blurring the current text displayed in the terminal screen comprises:
determining a corresponding target fuzzy processing parameter according to the current distance between the user and the terminal screen, and carrying out fuzzy processing on the current characters displayed in the terminal screen according to the target fuzzy processing parameter, wherein the fuzziness corresponding to the target fuzzy processing parameter is in a negative correlation relation with the current distance between the user and the terminal screen.
8. The method of claim 1, wherein when the terminal screen of the terminal is in the information display state and the eye protection mode is turned on, detecting a distance between a face of a user and the terminal screen by using a tracking ranging system, and obtaining the current distance between the user and the terminal screen comprises:
when a terminal screen of a terminal is in an information display state, acquiring target attribute information corresponding to current information displayed in the terminal screen;
when the target attribute information meets a preset type, entering an eye protection mode quick start mode, wherein in the eye protection mode quick start mode, the terminal automatically starts the eye protection mode or automatically displays eye protection mode start prompt information;
when the eye protection mode is started, the distance between the face of the user and the terminal screen is detected by utilizing a tracking ranging system, and the current distance between the user and the terminal screen is obtained.
9. The method of claim 1, wherein the detecting the distance between the face of the user and the terminal screen by using the tracking ranging system to obtain the current distance between the user and the terminal screen comprises:
opening a session control object, wherein the session control object is configured with configuration information corresponding to face tracking;
the session control object controls a camera to collect images to obtain image frames;
and performing face detection on the image frame, setting a virtual anchor point on a face when the face appears in the image frame, performing face tracking according to the virtual anchor point, and determining the current distance between the user and the terminal screen according to the current position of the tracked face and the position of the camera.
10. The method of claim 1, further comprising:
and when the current distance between the user and the terminal screen is smaller than or equal to a second preset distance, automatically stopping playing the current information played in the terminal screen, and continuing playing the current information from the position where the playing is stopped until the current distance between the user and the terminal screen is monitored to be larger than the second preset distance.
11. The method of claim 1, wherein prompting the user to adjust the distance from the terminal screen comprises:
and when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, calling a voice reminding interface to play voice reminding information, wherein the voice reminding information comprises eye protection reminding information.
12. A control device, the device comprising:
the system comprises a distance detection module, a distance detection module and a distance display module, wherein the distance detection module is used for detecting the distance between the face of a user and a terminal screen by using a tracking ranging system when the terminal screen of the terminal is in an information display state and an eye protection mode is started to obtain the current distance between the user and the terminal screen, the eye protection mode is started according to target attribute information corresponding to current information displayed in the terminal screen, and the target attribute information comprises attribute information obtained by classifying according to the age of a viewing object corresponding to the current information;
the distance adjustment prompting module is used for prompting the user to adjust the distance between the user and the terminal screen when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance;
the detection stopping and returning module is used for stopping the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system when the current distance between the user and the terminal screen is monitored to be greater than a first preset distance after the user is prompted to adjust the distance between the user and the terminal screen, and returning to the step of detecting the distance between the face of the user and the terminal screen by using the tracking ranging system until the preset time length is reached;
the current information displayed in the terminal screen comprises an image, the distance adjustment prompting module is used for carrying out fuzzy processing on the current image displayed in the terminal screen, and the distance adjustment prompting module comprises:
a reference pixel point obtaining unit, configured to obtain a current image displayed in the terminal screen, obtain a current pixel point in the current image, and obtain a target reference pixel distance corresponding to the current pixel point according to a current distance between the user and the terminal screen; taking the pixel point with the distance from the current pixel point smaller than the target reference pixel distance as a reference pixel point, wherein the target reference pixel distance and the current distance between the user and the terminal screen form a negative correlation relationship;
and the fuzzy pixel value calculating unit is used for calculating to obtain a fuzzy pixel value according to the pixel value of the reference pixel point, and taking the fuzzy pixel value as the pixel value of the current pixel point in the fuzzy image obtained by fuzzy processing.
13. The apparatus of claim 12, further comprising:
and the calling module is used for continuously calling the tracking ranging system to detect the distance between the face of the user and the terminal screen when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance.
14. The apparatus of claim 12, wherein the prompting the user to adjust the distance from the terminal screen further comprises at least one of:
the method comprises the steps of carrying out flicker processing on current information displayed in the terminal screen, segmenting the current information displayed in the terminal screen, deforming the current information displayed in the terminal screen and adding interference elements on the current information displayed in the terminal screen.
15. The apparatus according to claim 12, wherein the tracking ranging system is a system for implementing an augmented reality function, and when detecting a distance between a face of a user and the terminal screen, the configuration corresponding to face tracking in the tracking ranging system is turned on, and the configuration for establishing a correspondence between a real world and a virtual 3D coordinate space and modeling virtual content is turned off.
16. The apparatus of claim 12, further comprising:
and the recovery module is used for recovering the current information displayed in the terminal screen to be clearly displayed when the current distance between the user and the terminal screen is monitored to be larger than a first preset distance after the current information displayed in the terminal screen is processed.
17. The apparatus of claim 12, wherein the blurred pixel value calculation unit is configured to:
and carrying out weighted summation according to the pixel value of the reference pixel point and the weight corresponding to the reference pixel point to obtain a fuzzy pixel value, wherein the weight corresponding to the reference pixel point is determined according to the distance between the reference pixel point and the current pixel point.
18. The apparatus according to claim 12, wherein the current information displayed in the terminal screen includes text, and the distance adjustment prompting module is further configured to perform a blurring process on the current text displayed in the terminal screen, specifically:
determining a corresponding target fuzzy processing parameter according to the current distance between the user and the terminal screen, and carrying out fuzzy processing on the current characters displayed in the terminal screen according to the target fuzzy processing parameter, wherein the fuzziness corresponding to the target fuzzy processing parameter is in a negative correlation relation with the current distance between the user and the terminal screen.
19. The apparatus of claim 12, wherein the distance detection module comprises:
an attribute information acquisition unit, used for acquiring target attribute information corresponding to the current information displayed in the terminal screen when the terminal screen of the terminal is in an information display state;
the terminal comprises a quick opening mode entering unit, a quick opening mode display unit and a prompt unit, wherein the quick opening mode entering unit is used for entering an eye protection mode quick opening mode when the target attribute information meets a preset type, and the terminal automatically opens the eye protection mode or automatically displays eye protection mode opening prompt information under the eye protection mode quick opening mode;
and the distance detection unit is used for detecting the distance between the face of the user and the terminal screen by utilizing a tracking ranging system when the eye protection mode is started, so as to obtain the current distance between the user and the terminal screen.
20. The apparatus of claim 12, wherein the distance detection module is configured to:
opening a session control object, wherein the session control object is configured with configuration information corresponding to face tracking;
the session control object controls a camera to collect images to obtain image frames;
and performing face detection on the image frame, setting a virtual anchor point on a face when the face appears in the image frame, performing face tracking according to the virtual anchor point, and determining the current distance between the user and the terminal screen according to the current position of the tracked face and the position of the camera.
21. The apparatus of claim 12, further comprising:
and the stop and continuous playing control module is used for automatically stopping playing the current information played in the terminal screen when the current distance between the user and the terminal screen is smaller than or equal to a second preset distance, and starting to continuously play the current information from the position where the playing is stopped until the current distance between the user and the terminal screen is monitored to be larger than the second preset distance.
22. The apparatus of claim 12, wherein the distance adjustment prompting module is configured to:
and when the current distance between the user and the terminal screen is smaller than or equal to a first preset distance, calling a voice reminding interface to play voice reminding information, wherein the voice reminding information comprises eye protection reminding information.
23. A computer arrangement, characterized by comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the control method of any one of claims 1 to 11.
24. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, causes the processor to carry out the steps of the control method according to any one of claims 1 to 11.
CN201811458612.XA 2018-11-30 2018-11-30 Control method, control device, computer equipment and storage medium Active CN109558008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811458612.XA CN109558008B (en) 2018-11-30 2018-11-30 Control method, control device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811458612.XA CN109558008B (en) 2018-11-30 2018-11-30 Control method, control device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109558008A CN109558008A (en) 2019-04-02
CN109558008B true CN109558008B (en) 2020-10-27

Family

ID=65868462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811458612.XA Active CN109558008B (en) 2018-11-30 2018-11-30 Control method, control device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109558008B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119630A (en) * 2019-05-06 2019-08-13 珠海格力电器股份有限公司 Control method, device, storage medium and the processor of electronic equipment
CN110390745B (en) * 2019-06-03 2022-04-08 浙江大华技术股份有限公司 Gate control method, system, readable storage medium and device
CN111182388A (en) * 2019-12-02 2020-05-19 广东小天才科技有限公司 Screen display control method based on intelligent sound box and intelligent sound box
CN110897604A (en) * 2019-12-26 2020-03-24 深圳市博盛医疗科技有限公司 Laparoscope system for reducing three-dimensional distortion in 3D vision and use method
CN111158481B (en) * 2019-12-27 2021-10-26 腾讯科技(深圳)有限公司 Prompting method and device and computer readable storage medium
CN111429519B (en) * 2020-03-27 2021-07-16 贝壳找房(北京)科技有限公司 Three-dimensional scene display method and device, readable storage medium and electronic equipment
CN111586459B (en) * 2020-05-22 2022-10-14 北京百度网讯科技有限公司 Method and device for controlling video playing, electronic equipment and storage medium
CN112616046A (en) * 2020-12-15 2021-04-06 青岛海信激光显示股份有限公司 Laser projection equipment and prompting method thereof
CN114253623B (en) * 2021-11-19 2024-01-19 惠州Tcl移动通信有限公司 Screen amplification processing method and device based on mobile terminal, terminal and medium
CN115356907A (en) * 2022-08-23 2022-11-18 四川长虹电器股份有限公司 Method for detecting eyesight through child watch
CN115526809B (en) * 2022-11-04 2023-03-10 山东捷瑞数字科技股份有限公司 Image processing method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1467987A (en) * 2002-06-27 2004-01-14 ŵ��ʿ�ֻ���ʽ���� Image processing method, image processing program and memory media storing the program
CN103024163A (en) * 2012-12-03 2013-04-03 广东欧珀移动通信有限公司 Method and system for protecting eyesight and mobile terminal
CN104052871A (en) * 2014-05-27 2014-09-17 上海电力学院 Eye protecting device and method for mobile terminal
CN104932849A (en) * 2014-03-21 2015-09-23 海信集团有限公司 Application scenario setting method, device and system
CN105357631A (en) * 2015-11-23 2016-02-24 东莞酷派软件技术有限公司 Terminal use time control method and associated equipment
CN105472174A (en) * 2016-01-29 2016-04-06 四川工业科技学院 Intelligent eye protecting method achieved by controlling distance between mobile terminal and eyes
CN107454250A (en) * 2017-07-07 2017-12-08 广东小天才科技有限公司 Use eye care method during mobile terminal, device, mobile terminal and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100899A (en) * 2015-08-19 2015-11-25 上海斐讯数据通信技术有限公司 System and method for monitoring television watching mode by utilizing client
CN107256084A (en) * 2017-05-24 2017-10-17 段杰 Pre- myopic-preventing processing method and its terminal
CN107479694A (en) * 2017-07-14 2017-12-15 广东欧珀移动通信有限公司 A kind of sight protectio method, apparatus, storage medium and mobile terminal

Also Published As

Publication number Publication date
CN109558008A (en) 2019-04-02

Similar Documents

Publication Publication Date Title
CN109558008B (en) Control method, control device, computer equipment and storage medium
US10929982B2 (en) Face pose correction based on depth information
CN113259592B (en) Shooting method and device, electronic equipment and storage medium
CN112541400A (en) Behavior recognition method and device based on sight estimation, electronic equipment and storage medium
CN112291480B (en) Tracking focusing method, tracking focusing device, electronic device and readable storage medium
WO2019091487A1 (en) Image photographing method and device, terminal, and storage medium
CN112511743B (en) Video shooting method and device
CN112887615A (en) Shooting method and device
CN117115287A (en) Image generation method, device, electronic equipment and readable storage medium
CN111489284B (en) Image processing method and device for image processing
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN111967436B (en) Image processing method and device
WO2023044233A1 (en) Region of interest capture for electronic devices
CN114745767A (en) Power consumption control method and device for electronic equipment, electronic equipment and storage medium
CN116361761A (en) Information shielding method, information shielding device and electronic equipment
CN112738398A (en) Image anti-shake method and device and electronic equipment
CN112367470B (en) Image processing method and device and electronic equipment
CN115426505B (en) Preset expression special effect triggering method based on face capture and related equipment
CN116347009B (en) Video generation method and electronic equipment
CN112367562B (en) Image processing method and device and electronic equipment
CN114004922B (en) Bone animation display method, device, equipment, medium and computer program product
CN117528179A (en) Video generation method and device
CN116017146A (en) Image processing method and device
CN113727074A (en) Monitoring information prompting method and device and electronic equipment
CN115242976A (en) Shooting method, shooting device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant