Detailed Description
Certain terms are used throughout the description and claims to refer to particular components; those skilled in the art will appreciate that manufacturers may refer to the same component by different names. In this specification and the claims, components are distinguished not by differences in name but by differences in function. The present invention is described in detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flowchart of a method for automatically reducing a video window according to a first embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. As shown in fig. 1, the method comprises the steps of:
Step S101: when a user conducts a video call through the mobile terminal, acquiring image information around the mobile terminal at preset time intervals.
In step S101, the mobile terminal may be a device such as a mobile phone, a smartphone, a notebook computer, a personal digital assistant (PDA), a tablet computer (PAD), or the like. The periphery of the mobile terminal refers to the viewing range of the front camera of the mobile terminal. In this embodiment, when the user conducts a video call through the mobile terminal, the front camera of the mobile terminal acquires image information within its viewing range every predetermined time interval.
Step S102: determining, from the image information, whether a third person other than the user has appeared around the mobile terminal; if so, executing step S103; otherwise, executing step S104.
In step S102, when two or more faces are recognizable in the image information, it is determined that a third person other than the user has appeared around the mobile terminal.
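The determination in step S102 can be sketched as follows; the face-detection routine itself would be supplied by the terminal's imaging library, so `detect_faces` here is a hypothetical placeholder:

```python
def detect_faces(image):
    """Hypothetical placeholder for the terminal's face detector.
    Returns a list of face regions found in the captured image."""
    raise NotImplementedError

def third_person_present(faces):
    """Step S102: a third person is deemed present when two or more
    faces are recognizable in the image information."""
    return len(faces) >= 2
```

With only the user's face detected the check returns False; with any additional face it returns True.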
Step S103: the video window is automatically reduced to alert the user.
In step S103, when it is determined in step S102 that a third person other than the user is present around the mobile terminal, the video window is automatically reduced to remind the user that a third person is covertly watching the video call content, and to prevent that third person from continuing to watch it.
Preferably, while the video window is automatically reduced to remind the user, the image information around the mobile terminal may also be saved to the album of the mobile terminal, so that the user can later identify who the third person is.
Step S104: the video window is kept unchanged.
In step S104, when it is determined in step S102 that no third person other than the user is present around the mobile terminal, that is, only the user is watching the video window of the mobile terminal, the video window is kept unchanged.
In this embodiment, when a user conducts a video call through the mobile terminal, image information around the mobile terminal is acquired at preset time intervals; whether a third person other than the user has appeared around the mobile terminal is determined from the image information; and if so, the video window is automatically reduced to remind the user. In this manner, the probability of privacy disclosure during the user's video call is reduced, thereby improving the security of the video call.
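Under a simple polling model, the flow of fig. 1 can be sketched as follows; `capture_image`, `count_faces`, and the window operations are hypothetical placeholders standing in for terminal-specific functionality:

```python
import time

def monitor_video_call(capture_image, count_faces, reduce_window,
                       keep_window, interval_s=4, rounds=1):
    """Periodic check of fig. 1: every `interval_s` seconds, capture
    the surroundings and reduce the video window if a third person
    other than the user appears."""
    for _ in range(rounds):
        image = capture_image()         # step S101: acquire image info
        if count_faces(image) >= 2:     # step S102: third person present?
            reduce_window()             # step S103: shrink to alert user
        else:
            keep_window()               # step S104: leave window as-is
        time.sleep(interval_s)
```

In practice the loop would run for the lifetime of the call rather than a fixed number of rounds.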
Fig. 2 is a flowchart of a method for automatically reducing a video window according to a second embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 2 if the results are substantially the same. As shown in fig. 2, the method comprises the steps of:
Step S201: when a user conducts a video call through the mobile terminal, acquiring image information around the mobile terminal at preset time intervals.
In step S201, the mobile terminal may be a device such as a mobile phone, a smartphone, a notebook computer, a personal digital assistant (PDA), a tablet computer (PAD), or the like. The periphery of the mobile terminal refers to the viewing range of the front camera of the mobile terminal.
In this embodiment, when the user conducts a video call through the mobile terminal, the front camera of the mobile terminal acquires image information within its viewing range every predetermined time interval.
Step S202: extracting biometric information of a human face from the image information.
In step S202, the biometric information of the face may include face shape information, size information, pupil information, facial feature information, and other distinguishing information, where the facial feature information may include features such as double eyelids, bangs, moles, and the like.
Those skilled in the art will understand that if only the user is present around the mobile terminal, only the biometric information of the user's face is extracted from the image information; if a third person other than the user is also present, the biometric information of both the user's face and the third person's face is extracted from the image information.
Step S203: determining whether the extracted biometric information of the face matches the preset biometric information; if not, executing step S204; otherwise, executing step S205.
In step S203, the preset biometric information is obtained as follows: image information of the user's face is acquired, biometric information of the user's face is extracted from that image information, and the extracted biometric information is stored in the mobile terminal as the preset biometric information.
In this embodiment, the preset biometric information may include facial features that distinguish the user from other people, such as double eyelids, bangs, moles, and the like, so that whether the extracted biometric information belongs to the user's face can be determined more quickly.
In this embodiment, determining whether the extracted biometric information of the face matches the preset biometric information essentially determines whether a third person other than the user is within the viewing range of the front camera. That is, when the extracted biometric information does not match the preset biometric information, it is determined that a third person other than the user has appeared around the mobile terminal; when the extracted biometric information matches the preset biometric information, it is determined that only the user is around the mobile terminal.
In this embodiment, the specific method for determining whether the extracted biometric information of the face matches the preset biometric information may be an image-feature-based matching algorithm, a grayscale-based matching algorithm, or the like.
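As one possible instance of the feature-based matching mentioned above, the comparison against the preset biometric information could compare feature vectors by cosine similarity; the vector representation and the threshold value are illustrative assumptions, not part of the original disclosure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm if norm else 0.0

def matches_preset(extracted, preset, threshold=0.9):
    """Step S203/S304: the extracted face features match the preset
    biometric information when their similarity reaches a threshold."""
    return cosine_similarity(extracted, preset) >= threshold
```

A grayscale-based algorithm would instead compare pixel intensities directly, trading robustness for simplicity.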
Step S204: the video window is automatically reduced to alert the user.
In step S204, when it is determined in step S203 that the extracted biometric information of the face does not match the preset biometric information, that is, when a third person other than the user is watching the video window of the mobile terminal, the video window is automatically reduced to remind the user.
Step S205: the video window is kept unchanged.
In step S205, when it is determined in step S203 that the extracted biometric information of the face matches the preset biometric information, that is, only the user is watching the video window of the mobile terminal, the video window is kept unchanged.
In this embodiment, when a user conducts a video call through the mobile terminal, image information around the mobile terminal is acquired at preset time intervals; biometric information of a human face is extracted from the image information; whether the extracted biometric information matches the preset biometric information is determined; and if not, the video window is automatically reduced to remind the user. In this manner, the probability of privacy disclosure during the user's video call is reduced, thereby improving the security of the video call.
Fig. 3 is a flowchart of a method for automatically reducing a video window according to a third embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 3 if the results are substantially the same. As shown in fig. 3, the method comprises the steps of:
Step S301: providing a setting interface to enable a user to input parameters corresponding to the video window reduction function on the setting interface.
In step S301, a setting interface is provided for inputting the parameters corresponding to the video window reduction function, where the parameters include the preset attention time, the preset size and preset position of the reduced video window, and the preset biometric information.
In this embodiment, the video window reduction function exists in the form of a background process that, once turned on, runs in the background. The video call function and the video window reduction function of the mobile terminal may be mutually independent and started separately; alternatively, the video window reduction function may be started automatically after the video call function is started.
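The parameters entered on the setting interface of step S301 could be grouped as a single settings object; the field names and default values below are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ShrinkSettings:
    """Parameters for the video-window reduction function (step S301)."""
    attention_time_s: float = 3.0           # preset attention time
    reduced_scale: float = 0.5              # preset size: 1/2 of original
    reduced_position: str = "bottom-right"  # preset position (screen edge)
    preset_features: List[float] = field(default_factory=list)  # preset biometrics
```

A background process implementing the function would read this object once at startup and again whenever the user edits the settings.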
Step S302: when a user carries out video call through the mobile terminal, image information around the mobile terminal is acquired at preset time intervals.
In step S302, the mobile terminal may include a device such as a mobile phone, a smart phone, a notebook computer, a Personal Digital Assistant (PDA), a tablet computer (PAD), and the like. The periphery of the mobile terminal is a view finding range of a front camera of the mobile terminal.
In the embodiment, when a user carries out a video call through the mobile terminal, image information within a viewing range of the front camera is acquired by the front camera of the mobile terminal every predetermined time.
Step S303: extracting biometric information of a human face from the image information.
In step S303, the biometric information of the face may include face shape information, size information, pupil information, facial feature information, and other distinguishing information, where the facial feature information may include features such as double eyelids, bangs, moles, and the like.
Those skilled in the art will understand that if only the user is present around the mobile terminal, only the biometric information of the user's face is extracted from the image information; if a third person other than the user is also present, the biometric information of both the user's face and the third person's face is extracted from the image information.
Step S304: determining whether the extracted biometric information of the face matches the preset biometric information; if not, executing step S305; otherwise, executing step S309.
In step S304, the preset biometric information is obtained as follows: image information of the user's face is acquired, biometric information of the user's face is extracted from that image information, and the extracted biometric information is stored in the mobile terminal as the preset biometric information.
In this embodiment, the preset biometric information may include facial features that distinguish the user from other people, such as double eyelids, bangs, moles, and the like, so that whether the extracted biometric information belongs to the user's face can be determined more quickly.
In this embodiment, determining whether the extracted biometric information of the face matches the preset biometric information essentially determines whether a third person other than the user is within the viewing range of the front camera. That is, when the extracted biometric information does not match the preset biometric information, it is determined that a third person other than the user has appeared around the mobile terminal; when the extracted biometric information matches the preset biometric information, it is determined that only the user is around the mobile terminal.
In this embodiment, the specific method for determining whether the extracted biometric information of the face matches the preset biometric information may be an image-feature-based matching algorithm, a grayscale-based matching algorithm, or the like.
Step S305: determining whether the extracted biometric information of the face contains pupil information; if so, executing step S306; otherwise, executing step S309.
In step S305, when it is determined in step S304 that the extracted biometric information of the face does not match the preset biometric information, it is determined that a third person other than the user is present around the mobile terminal; at this time, it is further determined whether the extracted biometric information of the face contains pupil information.
In this embodiment, determining whether the extracted biometric information contains pupil information essentially determines whether the eyes of the third person are gazing at the video window of the mobile terminal. That is, when the extracted biometric information contains pupil information, it is determined that the third person's eyes are gazing at the video window; when it does not, it is determined that the third person's eyes are directed somewhere other than the video window.
Step S306: the duration of a video window looking at the mobile terminal is obtained.
In step S306, when it is determined in step S305 that there is pupil information in the extracted biometric information of the face, the duration of the video window that is looking at the mobile terminal is further acquired.
The step of acquiring the duration of a video window gazing at the mobile terminal comprises: capturing frame images around the mobile terminal through a front camera at the frequency of n (n is less than or equal to 4) seconds; the frequency of capturing images by the front camera is set to be less than or equal to 4 seconds, and is based on that the frequency of normal human blinks is 5 seconds/time, and the error of the blinking can be reduced by capturing images at the frequency of 4 seconds or lower; analyzing the frame images, and if judging that the pupil information of a third person except the user does not exist in a certain frame image, confirming that the third person except the user does not concentrate on the video window of the mobile terminal; and finally, acquiring the frame number of the frame image without the pupil information of a third person except the user, and obtaining the duration time of the video window watching the mobile terminal according to the frame number.
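The frame-count calculation above can be sketched as follows, assuming one captured frame every `interval_s` seconds and a per-frame boolean flag indicating whether the third person's pupil information was found:

```python
def gaze_duration(pupil_flags, interval_s=4):
    """Step S306: duration (in seconds) of the longest consecutive run
    of frames in which the third person's pupil information is present,
    given one frame captured every `interval_s` seconds."""
    longest = current = 0
    for present in pupil_flags:
        current = current + 1 if present else 0  # blink/look-away resets the run
        longest = max(longest, current)
    return longest * interval_s
```

The result would then be compared against the preset attention time in step S307.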
Step S307: determining whether the duration exceeds the preset attention time; if so, executing step S308; otherwise, executing step S309.
In step S307, the preset attention time may be set according to actual conditions, for example, 3 seconds or 5 seconds. In this embodiment, when the duration exceeds the preset attention time, it is determined that the third person is focused on the video window of the mobile terminal; when the duration does not exceed the preset attention time, it is determined that the third person merely glanced at the video window.
Step S308: the video window is automatically reduced to alert the user.
In step S308, when it is determined in step S307 that the duration exceeds the preset attention time, this indicates that a third person other than the user is paying attention to the user's video call content; at this time, the video window is automatically reduced to remind the user that the third person is covertly watching the video call content, and to prevent the third person from continuing to watch it.
The operation of automatically reducing the video window specifically includes: automatically reducing the video window to a preset size and displaying the reduced video window at a preset position, where the preset size may be 1/2, 1/4, or the like of the original video window, and the preset position may be an edge of the screen of the mobile terminal, or the like.
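The reduction operation can be sketched as computing a new window rectangle from the preset size and position; the screen dimensions and edge margin below are illustrative assumptions:

```python
def reduced_window_rect(orig_w, orig_h, scale=0.5,
                        screen_w=1080, screen_h=1920, margin=10):
    """Step S308: reduce the video window by `scale` (e.g. 1/2 or 1/4
    of the original) and place it at the bottom-right screen edge.
    Returns (x, y, width, height) of the reduced window."""
    new_w = int(orig_w * scale)
    new_h = int(orig_h * scale)
    x = screen_w - new_w - margin   # preset position: right edge
    y = screen_h - new_h - margin   # preset position: bottom edge
    return x, y, new_w, new_h
```

The actual repositioning call would go through the terminal's window manager, which varies by platform.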
Preferably, when it is determined in step S307 that the duration exceeds the preset attention time, the image information around the mobile terminal may also be saved to the album of the mobile terminal, so that the user can later identify who the third person is.
Step S309: the video window is kept unchanged.
In step S309, the video window is kept unchanged when it is determined in step S304 that the extracted biometric information of the face matches the preset biometric information, when it is determined in step S305 that the extracted biometric information contains no pupil information, or when it is determined in step S307 that the duration for which the third person gazes at the video window does not exceed the preset attention time; in each of these cases, it is determined that only the user is watching the video window of the mobile terminal.
In this embodiment, when a user conducts a video call through the mobile terminal, image information around the mobile terminal is acquired at preset time intervals; whether a third person other than the user has appeared around the mobile terminal is determined from the image information; if so, it is further determined whether the third person is focused on the video window of the mobile terminal; and if the third person is focused on the video window, the video window is automatically reduced to remind the user. In this manner, the accuracy of automatically reducing the video window is improved, the security of the user's video call is improved, and the user's experience of the video call function of the mobile terminal is preserved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
In this embodiment, the mobile terminal 300 includes a processor 21 and a camera circuit 22, the processor 21 being coupled to the camera circuit 22. The camera circuit 22 is configured to acquire image information around the mobile terminal, and the processor 21 is configured to respond to and process the image information acquired by the camera circuit 22 and to implement the method for automatically reducing a video window set forth in the above embodiments.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a memory device according to an embodiment of the invention.
In the present embodiment, the storage device 400 stores the program data 401, and the program data 401 can be executed to implement the method for automatically reducing the video window described in the above embodiments, which will not be described herein again.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions may be adopted in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.