CN107770476B - Method for automatically reducing video window, mobile terminal and storage device

Method for automatically reducing video window, mobile terminal and storage device

Info

Publication number
CN107770476B
Authority
CN
China
Prior art keywords: mobile terminal, user, video window, information, preset
Prior art date
Legal status
Active
Application number
CN201711017552.3A
Other languages
Chinese (zh)
Other versions
CN107770476A (en)
Inventor
邵珠剑
Current Assignee
Yibin Tianlong Communication Co.,Ltd.
Shenzhen Tinno Mobile Technology Co Ltd
Shenzhen Tinno Wireless Technology Co Ltd
Original Assignee
Shenzhen Tinno Mobile Technology Co Ltd
Shenzhen Tinno Wireless Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tinno Mobile Technology Co Ltd, Shenzhen Tinno Wireless Technology Co Ltd filed Critical Shenzhen Tinno Mobile Technology Co Ltd
Priority to CN201711017552.3A priority Critical patent/CN107770476B/en
Publication of CN107770476A publication Critical patent/CN107770476A/en
Application granted granted Critical
Publication of CN107770476B publication Critical patent/CN107770476B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/14 - Systems for two-way working
    • H04N7/141 - Systems for two-way working between two video terminals, e.g. videophone
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/193 - Preprocessing; Feature extraction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/478 - Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 - Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Telephone Function (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a method for automatically reducing a video window, a mobile terminal and a storage device. The method comprises the following steps: when a user conducts a video call through a mobile terminal, acquiring image information of the area around the mobile terminal at predetermined intervals, wherein the area around the mobile terminal is the viewing range of a front camera of the mobile terminal; determining, according to the image information, whether a third person other than the user has entered the area around the mobile terminal; and if a third person other than the user has entered the area around the mobile terminal, automatically reducing the video window to alert the user. In this way, the security of the user's video call can be improved.

Description

Method for automatically reducing video window, mobile terminal and storage device
Technical Field
The present invention relates to the field of mobile terminals, and in particular, to a method for automatically reducing a video window, a mobile terminal, and a storage device.
Background
With continuing advances in science and technology, mobile terminals support more and more functions and have gradually developed into comprehensive information-processing platforms that users rely on heavily in daily life.
The video call function is one of the most frequently used functions of a mobile terminal, but in actual use the user's privacy can be leaked without the user being aware of it.
For example, when a user makes a video call in a public place such as a bus or a subway, bystanders may watch the video content of the call, so that the user's privacy is revealed.
Therefore, how to improve the security of a user's video call is a problem that urgently needs to be solved.
Disclosure of Invention
In view of the above, the present invention provides a method for automatically reducing a video window, a mobile terminal and a storage device, which can improve the security of a user's video call.
In order to solve the above technical problem, one technical scheme adopted by the invention is to provide a method for automatically reducing a video window, the method comprising: when a user conducts a video call through a mobile terminal, acquiring image information of the area around the mobile terminal at predetermined intervals, wherein the area around the mobile terminal is the viewing range of a front camera of the mobile terminal; determining, according to the image information, whether a third person other than the user has entered the area around the mobile terminal; and if a third person other than the user has entered the area around the mobile terminal, automatically reducing the video window to alert the user.
In order to solve the above technical problem, another technical scheme adopted by the invention is to provide a mobile terminal comprising a processor and a camera circuit coupled to each other, wherein, in operation, the processor cooperates with the camera circuit to implement the following method for automatically reducing a video window, the method comprising:
when a user conducts a video call through the mobile terminal, acquiring image information of the area around the mobile terminal at predetermined intervals, wherein the area around the mobile terminal is the viewing range of a front camera of the mobile terminal; determining, according to the image information, whether a third person other than the user has entered the area around the mobile terminal; and if a third person other than the user has entered the area around the mobile terminal, automatically reducing the video window to alert the user.
In order to solve the above technical problem, a further technical scheme adopted by the invention is to provide a storage device storing program data executable to implement a method for automatically reducing a video window, the method comprising:
when a user conducts a video call through the mobile terminal, acquiring image information of the area around the mobile terminal at predetermined intervals, wherein the area around the mobile terminal is the viewing range of a front camera of the mobile terminal; determining, according to the image information, whether a third person other than the user has entered the area around the mobile terminal; and if a third person other than the user has entered the area around the mobile terminal, automatically reducing the video window to alert the user.
The invention has the following beneficial effects: when a user conducts a video call through the mobile terminal, image information of the area around the mobile terminal is acquired at predetermined intervals; whether a third person other than the user has entered the area around the mobile terminal is determined according to the image information; and if a third person other than the user has entered the area around the mobile terminal, the video window is automatically reduced to alert the user. In this way, the invention can reduce the probability of privacy leakage during the user's video call, thereby improving the security of the user's video call.
Drawings
FIG. 1 is a flow chart of a method for automatically reducing a video window according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a method for automatically reducing a video window according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a method for automatically reducing a video window according to a third embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a memory device according to an embodiment of the invention.
Detailed Description
Certain terms are used throughout the description and claims to refer to particular components; as those skilled in the art will appreciate, manufacturers may refer to the same component by different names. In this specification and in the claims, components are therefore distinguished not by differences in name but by differences in function. The present invention is described in detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flowchart of a method for automatically reducing a video window according to a first embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. As shown in fig. 1, the method comprises the steps of:
step S101: when a user carries out video call through the mobile terminal, image information around the mobile terminal is acquired at preset time intervals.
In step S101, the mobile terminal may be a device such as a mobile phone, a smartphone, a notebook computer, a personal digital assistant (PDA), or a tablet computer (PAD). The area around the mobile terminal is the viewing range of the front camera of the mobile terminal. In this embodiment, when the user conducts a video call through the mobile terminal, image information within the viewing range of the front camera is acquired by the front camera of the mobile terminal at predetermined intervals.
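As an illustration of the periodic capture described above, the following is a minimal sketch in Python, assuming the front camera is exposed as an OpenCV capture device (device index 0) and using a hypothetical handle_frame callback for the subsequent face check; a real terminal would use its own camera circuit and platform camera API rather than OpenCV.

```python
import time
import cv2

CAPTURE_INTERVAL_S = 3  # illustrative "predetermined time" between captures

def capture_periodically(handle_frame, max_captures=None):
    """Poll the front camera at a fixed interval during a video call."""
    cap = cv2.VideoCapture(0)  # assumed device index of the front camera
    if not cap.isOpened():
        raise RuntimeError("front camera not available")
    captured = 0
    try:
        while max_captures is None or captured < max_captures:
            ok, frame = cap.read()      # one image of the camera's viewing range
            if ok:
                handle_frame(frame)     # e.g. the face check of step S102
                captured += 1
            time.sleep(CAPTURE_INTERVAL_S)
    finally:
        cap.release()
```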
Step S102: Judging, according to the image information, whether a third person other than the user has entered the area around the mobile terminal; if so, executing step S103; otherwise, executing step S104.
In step S102, when two or more faces can be recognized in the image information, it is determined that a third person other than the user has entered the area around the mobile terminal.
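The face-counting check can be sketched as follows, using OpenCV's stock Haar face detector purely as a stand-in; the patent does not prescribe a particular face-detection algorithm, so the cascade file and thresholds here are assumptions.

```python
import cv2

_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def third_person_present(frame) -> bool:
    """Return True when two or more faces are visible in the captured frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) >= 2  # the user plus at least one other face
```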
Step S103: the video window is automatically reduced to alert the user.
In step S103, when it is determined in step S102 that a third person other than the user is present around the mobile terminal, the video window is automatically reduced, both to alert the user that someone else is secretly watching the video call content and to prevent that person from continuing to watch it.
Preferably, while the operation of automatically reducing the video window to alert the user is performed, the image information of the area around the mobile terminal may also be saved in an album of the mobile terminal, so that the user can later find out who the third person was.
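A minimal sketch of the optional album-saving step is shown below; modeling the album as a plain directory and the file-naming scheme are assumptions for illustration only.

```python
import os
import time
import cv2

ALBUM_DIR = os.path.expanduser("~/Pictures/video_call_alerts")  # hypothetical album path

def save_to_album(frame) -> str:
    """Store the captured surroundings so the user can later see who was watching."""
    os.makedirs(ALBUM_DIR, exist_ok=True)
    path = os.path.join(ALBUM_DIR, "third_person_%d.jpg" % int(time.time()))
    cv2.imwrite(path, frame)
    return path
```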
Step S104: the video window is kept unchanged.
In step S104, when it is determined in step S102 that no third person other than the user is present around the mobile terminal, that is, only the user is watching the video window of the mobile terminal, the video window is kept unchanged.
An advantage of this embodiment is that, when the user conducts a video call through the mobile terminal, image information of the area around the mobile terminal is acquired at predetermined intervals; whether a third person other than the user has entered the area around the mobile terminal is determined according to the image information; and if so, the video window is automatically reduced to alert the user. In this way, the probability of privacy leakage during the user's video call can be reduced, thereby improving the security of the user's video call.
Fig. 2 is a flowchart of a method for automatically reducing a video window according to a second embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 2 if the results are substantially the same. As shown in fig. 2, the method comprises the steps of:
step S201: when a user carries out video call through the mobile terminal, image information around the mobile terminal is acquired at preset time intervals.
In step S201, the mobile terminal may be a device such as a mobile phone, a smartphone, a notebook computer, a personal digital assistant (PDA), or a tablet computer (PAD). The area around the mobile terminal is the viewing range of the front camera of the mobile terminal.
In this embodiment, when the user conducts a video call through the mobile terminal, image information within the viewing range of the front camera is acquired by the front camera of the mobile terminal at predetermined intervals.
Step S202: Extracting biometric feature information of a human face from the image information.
In step S202, the biometric feature information of the face may include face shape information, size information, pupil information, facial feature information, and other identifying information, where the facial feature information may include features such as double eyelids, bangs, moles, and the like.
Those skilled in the art will understand that if only the user is present around the mobile terminal, only the biometric feature information of the user's face is extracted from the image information, whereas if a third person other than the user is also present, the biometric feature information of the user's face and of the third person's face are both extracted from the image information.
Step S203: Judging whether the extracted biometric feature information of the face matches the preset biometric feature information; if not, executing step S204; otherwise, executing step S205.
In step S203, the preset biometric feature information is obtained as follows: image information of the user's face is acquired, biometric feature information of the user's face is extracted from that image information, and the extracted information is stored in the mobile terminal to form the preset biometric feature information.
In this embodiment, the preset biometric feature information may include facial features that distinguish the user from other people, such as double eyelids, bangs or moles, so that it can be determined more quickly whether the extracted biometric feature information of a face belongs to the user.
In this embodiment, judging whether the extracted biometric feature information of the face matches the preset biometric feature information is essentially judging whether a third person other than the user is within the viewing range of the front camera. That is, when the extracted biometric feature information of the face does not match the preset biometric feature information, it is determined that a third person other than the user has entered the area around the mobile terminal; when it does match, it is determined that only the user is around the mobile terminal.
In this embodiment, the specific method for judging whether the extracted biometric feature information of the face matches the preset biometric feature information may be, for example, a matching algorithm based on image features or a matching algorithm based on image gray levels.
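As one concrete reading of "matching based on image gray levels", the sketch below compares the grayscale histogram of a detected face crop with an enrolled histogram of the user's face; the threshold and the histogram representation are assumptions, and the embodiment could equally use a feature-based matcher.

```python
import cv2
import numpy as np

MATCH_THRESHOLD = 0.8  # illustrative correlation threshold

def face_histogram(face_bgr) -> np.ndarray:
    """Normalized 64-bin grayscale histogram of a face crop."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def matches_preset(face_bgr, preset_hist: np.ndarray) -> bool:
    """True when the face crop correlates strongly with the enrolled user histogram."""
    corr = cv2.compareHist(face_histogram(face_bgr), preset_hist, cv2.HISTCMP_CORREL)
    return corr >= MATCH_THRESHOLD
```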
Step S204: the video window is automatically reduced to alert the user.
In step S204, when it is determined in step S203 that the extracted biometric feature information of the face does not match the preset biometric feature information, that is, when a third person other than the user is watching the video window of the mobile terminal, the video window is automatically reduced to alert the user.
Step S205: the video window is kept unchanged.
In step S205, when it is determined in step S203 that the extracted biometric feature information of the face matches the preset biometric feature information, that is, only the user is watching the video window of the mobile terminal, the video window is kept unchanged.
An advantage of this embodiment is that, when the user conducts a video call through the mobile terminal, image information of the area around the mobile terminal is acquired at predetermined intervals; biometric feature information of the face is extracted from the image information; whether the extracted biometric feature information matches the preset biometric feature information is judged; and if not, the video window is automatically reduced to alert the user. In this way, the probability of privacy leakage during the user's video call can be reduced, thereby improving the security of the user's video call.
Fig. 3 is a flowchart of a method for automatically reducing a video window according to a third embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 3 if the results are substantially the same. As shown in fig. 3, the method comprises the steps of:
step S301: and providing a setting interface to enable a user to input parameters corresponding to the video window shrinking function on the setting interface.
In step S301, a setting interface is provided for entering the parameters corresponding to the function of automatically reducing the video window. The parameters include the preset attention time, the preset size and the preset position of the reduced video window, and the preset biometric feature information.
In this embodiment, the function of reducing the video window exists in the form of a background process that, once turned on, runs in the background. The video call function and the video-window-reducing function of the mobile terminal may be independent of each other and started separately, or the video-window-reducing function may be started automatically after the video call function is started.
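The parameters collected by such a setting interface might be grouped as in the sketch below; the field names and default values are illustrative and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ReduceWindowConfig:
    preset_attention_time_s: float = 3.0        # gaze time before the window is reduced
    preset_size_fraction: float = 0.25          # e.g. 1/4 of the original window
    preset_position: Tuple[int, int] = (0, 0)   # e.g. a screen-edge corner
    preset_biometrics: List[bytes] = field(default_factory=list)  # enrolled user features
    enabled: bool = False                       # whether the background process is running
```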
Step S302: When a user conducts a video call through the mobile terminal, image information of the area around the mobile terminal is acquired at predetermined intervals.
In step S302, the mobile terminal may be a device such as a mobile phone, a smartphone, a notebook computer, a personal digital assistant (PDA), or a tablet computer (PAD). The area around the mobile terminal is the viewing range of the front camera of the mobile terminal.
In this embodiment, when the user conducts a video call through the mobile terminal, image information within the viewing range of the front camera is acquired by the front camera of the mobile terminal at predetermined intervals.
Step S303: Extracting biometric feature information of a human face from the image information.
In step S303, the biometric feature information of the face may include face shape information, size information, pupil information, facial feature information, and other identifying information, where the facial feature information may include features such as double eyelids, bangs, moles, and the like.
Those skilled in the art will understand that if only the user is present around the mobile terminal, only the biometric feature information of the user's face is extracted from the image information, whereas if a third person other than the user is also present, the biometric feature information of the user's face and of the third person's face are both extracted from the image information.
Step S304: Judging whether the extracted biometric feature information of the face matches the preset biometric feature information; if not, executing step S305; otherwise, executing step S309.
In step S304, the preset biometric feature information is obtained as follows: image information of the user's face is acquired, biometric feature information of the user's face is extracted from that image information, and the extracted information is stored in the mobile terminal to form the preset biometric feature information.
In this embodiment, the preset biometric feature information may include facial features that distinguish the user from other people, such as double eyelids, bangs or moles, so that it can be determined more quickly whether the extracted biometric feature information of a face belongs to the user.
In this embodiment, judging whether the extracted biometric feature information of the face matches the preset biometric feature information is essentially judging whether a third person other than the user is within the viewing range of the front camera. That is, when the extracted biometric feature information of the face does not match the preset biometric feature information, it is determined that a third person other than the user has entered the area around the mobile terminal; when it does match, it is determined that only the user is around the mobile terminal.
In this embodiment, the specific method for judging whether the extracted biometric feature information of the face matches the preset biometric feature information may be, for example, a matching algorithm based on image features or a matching algorithm based on image gray levels.
Step S305: Judging whether the extracted biometric feature information of the face contains pupil information; if so, executing step S306; otherwise, executing step S309.
In step S305, when it is determined in step S304 that the extracted biometric feature information of the face does not match the preset biometric feature information, it is determined that a third person other than the user is present around the mobile terminal, and it is then further judged whether the extracted biometric feature information of the face contains pupil information.
In this embodiment, judging whether the extracted biometric feature information of the face contains pupil information is essentially judging whether the eyes of the third person other than the user are gazing at the video window of the mobile terminal. That is, when the extracted biometric feature information of the face contains pupil information, it is determined that the eyes of the third person other than the user are gazing at the video window of the mobile terminal; when it does not contain pupil information, it is determined that the eyes of the third person other than the user are directed somewhere other than the video window.
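A rough proxy for the pupil check is sketched below: eyes detected inside a face region are treated as "pupil information present". This equivalence, and the use of OpenCV's stock eye cascade, are assumptions; the patent leaves the pupil-detection method open.

```python
import cv2

_FACE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
_EYES = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def pupils_visible(frame) -> bool:
    """True when at least one detected face shows open eyes facing the camera."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in _FACE.detectMultiScale(gray, 1.1, 5):
        eyes = _EYES.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
        if len(eyes) > 0:
            return True
    return False
```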
Step S306: The duration for which the video window of the mobile terminal has been gazed at is obtained.
In step S306, when it is determined in step S305 that the extracted biometric feature information of the face contains pupil information, the duration for which the video window of the mobile terminal has been gazed at is further acquired.
The step of acquiring the duration for which the video window of the mobile terminal has been gazed at includes: capturing frame images of the area around the mobile terminal through the front camera once every n seconds (n ≤ 4); the capture interval of the front camera is set to 4 seconds or less because a person normally blinks about once every 5 seconds, so capturing at intervals of 4 seconds or less reduces the error introduced by blinking; the frame images are analyzed, and if it is judged that the pupil information of the third person other than the user is absent from a certain frame image, it is confirmed that the third person other than the user is no longer focused on the video window of the mobile terminal; finally, the frame number of the frame image lacking the pupil information of the third person other than the user is acquired, and the duration for which the video window of the mobile terminal has been gazed at is obtained from that frame number.
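One possible reading of the bookkeeping above is sketched below: frames are sampled at a fixed interval no longer than 4 seconds, and the gaze duration is the number of consecutive frames that still contain pupil information, multiplied by that interval. The pupils_visible helper is the proxy sketched earlier, and treating the first pupil-free frame as the end of the gaze is an interpretation, not the patent's exact wording.

```python
CAPTURE_INTERVAL_S = 4  # must not exceed 4 s, i.e. shorter than the ~5 s blink cycle

def gaze_duration_s(frames) -> float:
    """Duration for which a watcher kept their pupils on the video window."""
    consecutive = 0
    for frame in frames:
        if pupils_visible(frame):   # proxy check from the earlier sketch
            consecutive += 1
        else:
            break                   # first frame without pupils ends the gaze
    return consecutive * CAPTURE_INTERVAL_S
```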
Step S307: Judging whether the duration exceeds the preset attention time; if so, executing step S308; otherwise, executing step S309.
In step S307, the preset attention time may be set according to actual needs and may be, for example, 3 seconds or 5 seconds. In this embodiment, when the duration exceeds the preset attention time, it is determined that the third person other than the user is focused on the video window of the mobile terminal; when the duration does not exceed the preset attention time, it is determined that the third person other than the user merely glanced at the video window of the mobile terminal by accident.
Step S308: the video window is automatically reduced to alert the user.
In step S308, when it is determined in step S307 that the duration exceeds the preset attention time, it indicates that a third person other than the user is paying attention to the user's video call content; at this point the video window is automatically reduced, both to alert the user that someone else is secretly watching the video call content and to prevent that person from continuing to watch it.
The operation of automatically reducing the video window specifically includes: automatically reducing the video window to a preset size and displaying the reduced video window at a preset position, where the preset size may be, for example, 1/2 or 1/4 of the original video window, and the preset position may be, for example, an edge of the screen of the mobile terminal.
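The geometry of the reduction can be sketched as follows: scale the window rectangle by the preset fraction and pin it to a preset edge. The screen/window model and the chosen corner are illustrative assumptions, since the actual placement is whatever the user configured.

```python
from typing import Tuple

def reduced_rect(orig_w: int, orig_h: int,
                 screen_w: int, screen_h: int,
                 fraction: float = 0.25) -> Tuple[int, int, int, int]:
    """Return (x, y, w, h) of the reduced window pinned to the bottom-right edge."""
    new_w, new_h = int(orig_w * fraction), int(orig_h * fraction)
    x, y = screen_w - new_w, screen_h - new_h  # one possible preset position
    return x, y, new_w, new_h

# Example: reduced_rect(1080, 1920, 1080, 1920) -> (810, 1440, 270, 480)
```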
Preferably, when it is determined in step S307 that the duration exceeds the preset attention time, the image information of the area around the mobile terminal may also be saved in an album of the mobile terminal, so that the user can later find out who the third person was.
Step S309: the video window is kept unchanged.
In step S309, when it is determined in step S304 that the extracted biometric feature information of the face matches the preset biometric feature information, or when it is determined in step S305 that the extracted biometric feature information of the face does not contain pupil information, or when it is determined in step S307 that the duration for which the third person other than the user has gazed at the video window of the mobile terminal does not exceed the preset attention time, it is concluded that only the user is watching the video window of the mobile terminal, and the video window is kept unchanged.
An advantage of this embodiment is that, when the user conducts a video call through the mobile terminal, image information of the area around the mobile terminal is acquired at predetermined intervals; whether a third person other than the user has entered the area around the mobile terminal is determined according to the image information; if so, it is further judged whether the third person is focused on the video window of the mobile terminal; and if the third person is focused on the video window, the video window is automatically reduced to alert the user. In this way, the accuracy of automatically reducing the video window can be improved, the security of the user's video call can be improved, and the user's experience of the video call function of the mobile terminal is preserved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
In the present embodiment, the mobile terminal 300 includes a processor 21 and a camera circuit 22, and the processor 21 is coupled to the camera circuit 22. The camera circuit 22 is configured to obtain image information of the area around the mobile terminal, and the processor 21 is configured to process the image information obtained by the camera circuit 22 and to implement the method for automatically reducing a video window set forth in the above embodiments.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a memory device according to an embodiment of the invention.
In the present embodiment, the storage device 400 stores the program data 401, and the program data 401 can be executed to implement the method for automatically reducing the video window described in the above embodiments, which will not be described herein again.
In the several embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A method for automatically reducing a video window, the method comprising:
when a user conducts a video call through a mobile terminal, acquiring image information of the area around the mobile terminal at predetermined intervals, wherein the area around the mobile terminal is the viewing range of a front camera of the mobile terminal;
extracting biometric feature information of a human face from the image information;
judging whether the extracted biometric feature information of the face matches preset biometric feature information;
if not, judging that a third person other than the user has entered the area around the mobile terminal, and further judging whether the extracted biometric feature information of the face contains pupil information;
if pupil information is contained, determining that a third person other than the user is gazing at the video window of the mobile terminal, and further acquiring the duration for which the video window of the mobile terminal has been gazed at; wherein the step of acquiring the duration for which the video window of the mobile terminal has been gazed at comprises: capturing images of the area around the mobile terminal at intervals shorter than the interval between normal human blinks, judging that the pupil information of the third person other than the user is absent from a captured image, acquiring the frame number of the image from which the pupil information of the third person other than the user is absent, and obtaining, according to the frame number, the duration for which the video window of the mobile terminal has been gazed at;
and when the duration exceeds the preset attention time, performing an operation of automatically reducing the video window to alert the user.
2. The method of claim 1, wherein the step of automatically reducing the video window comprises:
and automatically reducing the video window according to a preset size and displaying the reduced video window at a preset position.
3. The method of claim 2, further comprising:
providing a setting interface so that the user inputs, on the setting interface, parameters corresponding to the function of reducing the video window, wherein the parameters comprise the preset attention time, the preset size and the preset position of the reduced video window, and the preset biometric feature information.
4. The method according to claim 1, wherein the preset biometric feature information is obtained in the following manner:
acquiring image information of the face of the user;
extracting the biometric feature information of the user's face from the image information of the user's face and storing it in the mobile terminal to form the preset biometric feature information.
5. The method according to claim 1, wherein the step of acquiring the image information of the area around the mobile terminal at predetermined intervals comprises:
acquiring, by the front camera of the mobile terminal at predetermined intervals, image information within the viewing range of the front camera.
6. The method according to claim 1, wherein if the extracted biometric feature information matches the preset biometric feature information, it is determined that no third person other than the user is present around the mobile terminal, and the video window is kept unchanged.
7. A mobile terminal, comprising:
a processor and a camera circuit coupled to each other, the processor being operative to implement the method for automatically reducing a video window according to any one of claims 1 to 6 in cooperation with the camera circuit.
8. A storage device storing program data executable to implement a method of automatically reducing a video window according to any one of claims 1 to 6.
CN201711017552.3A 2017-10-25 2017-10-25 Method for automatically reducing video window, mobile terminal and storage device Active CN107770476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711017552.3A CN107770476B (en) 2017-10-25 2017-10-25 Method for automatically reducing video window, mobile terminal and storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711017552.3A CN107770476B (en) 2017-10-25 2017-10-25 Method for automatically reducing video window, mobile terminal and storage device

Publications (2)

Publication Number Publication Date
CN107770476A CN107770476A (en) 2018-03-06
CN107770476B true CN107770476B (en) 2021-01-01

Family

ID=61270801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711017552.3A Active CN107770476B (en) 2017-10-25 2017-10-25 Method for automatically reducing video window, mobile terminal and storage device

Country Status (1)

Country Link
CN (1) CN107770476B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145827A (en) * 2018-08-24 2019-01-04 阿里巴巴集团控股有限公司 Video communication method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218579A (en) * 2013-03-28 2013-07-24 东莞宇龙通信科技有限公司 Method for preventing content on screen from being peeped, and mobile terminal thereof
CN104735245A (en) * 2015-02-28 2015-06-24 深圳市中兴移动通信有限公司 Mobile terminal information protection method and mobile terminal
CN104834866A (en) * 2014-02-11 2015-08-12 中兴通讯股份有限公司 Method and device for protecting privacy-sensitive information by automatically recognizing scene
CN104915012A (en) * 2015-06-29 2015-09-16 广东欧珀移动通信有限公司 Screen locking method of terminal and device
CN105389527A (en) * 2015-10-27 2016-03-09 努比亚技术有限公司 Peek prevention apparatus and method for mobile terminal
CN105472303A (en) * 2015-11-20 2016-04-06 小米科技有限责任公司 Privacy protection method and apparatus for video chatting
CN106156663A (en) * 2015-04-14 2016-11-23 小米科技有限责任公司 A kind of terminal environments detection method and device
CN106303353A (en) * 2016-08-17 2017-01-04 深圳市金立通信设备有限公司 A kind of video session control method and terminal


Also Published As

Publication number Publication date
CN107770476A (en) 2018-03-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20211009
Address after: 518053 H3 501B, east industrial area of overseas Chinese town, Nanshan District, Shenzhen, Guangdong
Patentee after: SHENZHEN TINNO WIRELESS TECHNOLOGY Co.,Ltd.
Patentee after: Tinno Mobile Technology Corp.
Patentee after: Yibin Tianlong Communication Co.,Ltd.
Address before: 518053 H3 501B, east industrial area of overseas Chinese town, Nanshan District, Shenzhen, Guangdong
Patentee before: SHENZHEN TINNO WIRELESS TECHNOLOGY Co.,Ltd.
Patentee before: Tinno Mobile Technology Corp.