CN112632510A - Information protection method and storage medium - Google Patents

Information protection method and storage medium

Info

Publication number
CN112632510A
CN112632510A (application CN202011633542.4A)
Authority
CN
China
Prior art keywords
viewer
face
display screen
information
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011633542.4A
Other languages
Chinese (zh)
Inventor
区国雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Fushi Technology Co Ltd
Original Assignee
Shenzhen Fushi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Fushi Technology Co Ltd filed Critical Shenzhen Fushi Technology Co Ltd
Priority to CN202011633542.4A
Publication of CN112632510A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/70: Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/82: Protecting input, output or interconnection devices
    • G06F 21/84: Protecting output devices, e.g. displays or monitors

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an information protection method for protecting information displayed on a display screen from being peeped at by unauthorized persons. The information protection method comprises the following steps: acquiring three-dimensional data of a viewer's face; judging, from the acquired three-dimensional data, whether the viewer poses a peeping risk; once a peeping risk is found, identifying whether the viewer is a preset authorized person by comparing the viewer's three-dimensional face data with a preset identity feature template of the authorized person; and performing an information protection operation when a viewer at risk of peeping is identified as unauthorized. The application also provides a storage medium.

Description

Information protection method and storage medium
Technical Field
The present application relates to the field of biometric identification, and in particular, to an information protection method and a storage medium.
Background
As electronic devices such as notebook computers, tablet computers, mobile phones and self-service terminals grow more capable, more and more important matters are handled on them, and with that the risk of leaking important information rises. For example, when a user operates an electronic device in a public place, nearby people can easily peep at the screen, causing information leakage. How to prevent peeping while using an electronic device has therefore become an important problem for information protection.
Disclosure of Invention
The technical problem addressed by the present application is to provide an information protection method and a storage medium that prevent information leakage caused by peeping when an electronic device is used.
The embodiment of the application provides an information protection method for protecting information displayed on a display screen from being peeped by an unauthorized person. The information protection method comprises the following steps:
acquiring three-dimensional data of the face of a viewer;
judging whether the viewer has peeping risk according to the acquired three-dimensional data of the face of the viewer;
after judging that the viewer poses a peeping risk, identifying whether the viewer is a preset authorized person by comparing the viewer's three-dimensional face data with a preset identity feature template of the authorized person; and
performing an information protection operation upon identifying that the viewer at risk of peeping is an unauthorized person.
In some embodiments, the viewer is judged to be at risk of peeping if the distance between the viewer's face and the display screen is smaller than a preset peeping distance threshold; or
if the visible area of the viewer overlaps a preset key area of the display screen; or
if both conditions hold, i.e. the distance between the viewer's face and the display screen is smaller than the preset peeping distance threshold and the visible area of the viewer overlaps the preset key area of the display screen.
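The judgment conditions above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the function name, the set-based overlap model and the threshold value are all invented for illustration.

```python
PEEP_DISTANCE_THRESHOLD_CM = 45  # assumed value; the text later lists several options

def peeping_risk(face_distance_cm, visible_area, key_area, require_both=False):
    """Judge peeping risk from the two conditions described above.

    face_distance_cm: distance between the viewer's face and the screen.
    visible_area / key_area: sets of screen coordinate points (simplified model).
    require_both: when True, both conditions must hold (the third variant).
    """
    too_close = face_distance_cm < PEEP_DISTANCE_THRESHOLD_CM
    overlaps = bool(visible_area & key_area)  # any shared point counts as overlap
    return (too_close and overlaps) if require_both else (too_close or overlaps)

# A viewer 30 cm away whose visible area shares a point with the key area:
print(peeping_risk(30, {(1, 1), (2, 2)}, {(2, 2), (3, 3)}))  # True
```

Under the "or" variant either condition alone triggers the risk judgment; the `require_both` flag switches to the stricter "and" variant.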
In some embodiments, the method of calculating the viewable area of the viewer comprises the steps of:
constructing the viewer's eye-viewable angle range by taking the viewer's face orientation as the centre and deviating from the face orientation by the set eye-viewable angle threshold along each preset direction; and
calculating the area covered by the viewer's eye-viewable angle range on the plane of the display screen as the viewer's visible area at the current position and face orientation.
In some embodiments, the method of calculating the orientation of the viewer's face comprises the steps of:
extracting three-dimensional data of the face feature points of the viewer;
connecting the extracted facial feature points to construct a corresponding facial reference plane; and
calculating the normal vector of the facial reference plane as the face-orientation vector.
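The three steps above reduce to a cross product. A minimal sketch, assuming three facial feature points are already available as 3-D coordinates (the point values below are invented for illustration):

```python
def face_orientation(p1, p2, p3):
    """Unit normal of the plane through three 3-D facial feature points."""
    v1 = [p2[i] - p1[i] for i in range(3)]
    v2 = [p3[i] - p1[i] for i in range(3)]
    # Cross product v1 x v2 gives a vector perpendicular to the reference plane.
    n = [v1[1] * v2[2] - v1[2] * v2[1],
         v1[2] * v2[0] - v1[0] * v2[2],
         v1[0] * v2[1] - v1[1] * v2[0]]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]

# Three points in the z = 0 plane give a normal along the z axis:
print(face_orientation((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0.0, 0.0, 1.0]
```

The sign of the normal depends on the winding order of the points, so a real implementation would need a convention (e.g. left eye, right eye, mouth corner) to make the vector point away from the face.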
In some embodiments, the three-dimensional data of the viewer's face is acquired based on one or more of the structured-light sensing principle, the time-of-flight sensing principle and the binocular-vision sensing principle.
In some embodiments, the information protection operation includes one or more of: turning off the display screen, popping up a privacy-protection text prompt, changing the brightness of the display screen, playing a privacy-protection sound prompt, recording evidence of the privacy risk, and raising an automatic alarm.
In some embodiments, before acquiring the three-dimensional data of the face of the viewer, the method further comprises the following steps:
acquiring environment information of an environment where a display screen is located and/or state information of the display screen;
executing the information protection method according to the result of comparing the acquired environment information and/or state information with preset sensing reference information.
In some embodiments, the sensing reference information includes an audio feature template, a proximity distance threshold, a face-number threshold and an acceleration-change threshold. The acquired environmental sound information is compared with an audio feature template trained in advance for a specific scene, and the information protection method is executed when the comparison determines that the display screen is in that preset scene;
or the distance between a viewer and the display screen is obtained, and the information protection method is executed when that distance is smaller than the preset proximity distance threshold;
or image information in front of the display screen is acquired, and the information protection method is executed when the number of human faces appearing in the image exceeds the preset face-number threshold;
or a sensor automatically acquires the acceleration change of the display screen, and the information protection method is executed when the sensed magnitude of the acceleration change exceeds the preset acceleration-change threshold.
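The wake-up checks above amount to comparing each sensed value against its preset reference. A hedged sketch, with all field names and threshold values invented for illustration:

```python
# Assumed sensing reference information (the patent names the thresholds but
# does not fix their values).
SENSING_REFERENCE = {
    "proximity_cm": 80,   # proximity distance threshold
    "max_faces": 1,       # face-number threshold
    "accel_delta": 2.0,   # acceleration-change threshold (m/s^2)
}

def should_run_protection(distance_cm=None, face_count=None, accel_change=None):
    """Return True when any sensed value crosses its preset reference."""
    ref = SENSING_REFERENCE
    if distance_cm is not None and distance_cm < ref["proximity_cm"]:
        return True
    if face_count is not None and face_count > ref["max_faces"]:
        return True
    if accel_change is not None and accel_change > ref["accel_delta"]:
        return True
    return False

print(should_run_protection(face_count=2))     # True: more faces than allowed
print(should_run_protection(distance_cm=120))  # False: viewer still far away
```

The audio-template comparison is omitted here because it would require a trained model; the other three triggers are simple threshold tests as described.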
A storage medium is also provided for storing program code executable by one or more processors. When executed by a processor, the program code implements the information protection method described in the above embodiments.
In the information protection method described above, the viewer's identity is recognized only after the three-dimensional data collected in front of the display screen shows that the viewer poses a peeping risk, which avoids frequently performing the power-hungry identity recognition based on three-dimensional data. Furthermore, when a viewer at risk of peeping is identified as an unauthorized person, a corresponding information protection operation is performed automatically to prevent the important information on the display screen from being peeped at.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device provided with an information protection apparatus according to an embodiment of the present application.
Fig. 2 is a functional block diagram of the electronic device shown in Fig. 1.
Fig. 3 is a schematic diagram of calculation of a viewer's visible region in the embodiment of the present application.
Fig. 4 is a schematic diagram of calculation of the orientation of the viewer's face in the embodiment of the present application.
Fig. 5 is a schematic diagram of the calculation of the orientation of the viewer's face in another embodiment of the present application.
Fig. 6 is a schematic functional module diagram of the electronic device according to another embodiment of the present application.
Fig. 7 is a flowchart of an information protection method based on the information protection apparatus according to an embodiment of the present application.
Fig. 8 is a flowchart of the substeps of step S102 in fig. 7.
Fig. 9 is a flowchart of the substeps of step S103 in fig. 7.
Fig. 10 is a flowchart illustrating the sub-steps of step S104 according to an embodiment of the present application.
Fig. 11 is a flowchart illustrating the sub-steps of step S104 according to another embodiment of the present application.
Fig. 12 is a flowchart illustrating the sub-steps of step S104 according to yet another embodiment of the present application.
Fig. 13 is a flowchart of an information protection method based on the information protection apparatus according to another embodiment of the present application.
Detailed Description of Embodiments
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application. In the description of the present application, it is to be understood that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any order or number of technical features indicated. Thus, features defined as "first" and "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the description of the present application, it should be noted that, unless explicitly stated or limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical connection, an electrical connection, or communication with each other; as a direct connection or an indirect connection through an intervening medium; or as internal communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
The following disclosure provides many different embodiments, or examples, for implementing different features of the application. In order to simplify the disclosure of the present application, only the components and settings of a specific example are described below. Of course, they are merely examples and are not intended to limit the present application. Moreover, the present application may repeat reference numerals and/or letters in the various examples, such repeat use is intended to provide a simplified and clear description of the present application and may not in itself dictate a particular relationship between the various embodiments and/or configurations discussed. In addition, the various specific processes and materials provided in the following description of the present application are only examples of implementing the technical solutions of the present application, but one of ordinary skill in the art should recognize that the technical solutions of the present application can also be implemented by other processes and/or other materials not described below.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject technology can be practiced without one or more of the specific details, or with other structures, components, and so forth. In other instances, well-known structures or operations are not shown or described in detail to avoid obscuring the focus of the application.
Referring to fig. 1 and fig. 2 together, an embodiment of the present application provides an information protection apparatus 1 for preventing information leakage caused by peeping by a nearby unauthorized person when an electronic device 2 is operated. The electronic device 2 may be, but is not limited to, a computer, a mobile phone or a self-service terminal. The self-service terminal is, for example, an Automatic Teller Machine (ATM) of a bank, or a touch interactive terminal arranged in a bank hall or government-affairs hall for handling self-service business. The electronic device 2 comprises a display screen 3 for displaying information during operation. The information protection apparatus 1 sends out a prompt or hides information when it detects that an unauthorized person meeting the peeping judgment conditions has appeared in front of the display screen 3, so as to protect the displayed information from being peeped at.
The information protection apparatus 1 includes an information collector 12, a processor 14, and a control system 16. The information collector 12 is configured to collect depth information in a scene; once analyzed, the depth information can be used for identity recognition or state sensing. Optionally, in some embodiments, the depth information comprises the distance between the viewer and the display screen 3 and three-dimensional data of the viewer's face. The distance between the viewer and the display screen 3 is used to determine whether the viewer has entered a peepable range, so as to decide whether the information protection apparatus 1 needs to be woken up. By performing logical operations on the three-dimensional data of the viewer's face, the processor 14 can reconstruct the face in three dimensions as a three-dimensional point cloud. The information protection apparatus 1 can then perform identity recognition by matching the acquired three-dimensional data of the viewer's face against the three-dimensional data of an authorized person's face. The latter is obtained by three-dimensionally scanning the face of a legally authorized user of the electronic device 2 with the information collector 12, and can be stored for use in subsequent recognition. In addition, in some embodiments, the information collector 12 may use infrared light, so that the three-dimensional data of the viewer's face can be acquired accurately even in a dark environment, giving better adaptability and stability.
Moreover, because human skin reflects and absorbs infrared light differently from other materials, the information collector 12 can also distinguish real human skin from forged face models or face photographs, improving the reliability of identity recognition.
Optionally, in some embodiments, the information collector 12 is disposed on the display screen 3. The information collector 12 and the display screen 3 have a fixed relative position relationship, and the depth information of the external object acquired by the information collector 12 relative to the information collector 12 can be converted into the depth information of the corresponding external object relative to the display screen 3 through geometric conversion. Optionally, in some other embodiments, the information collector 12 may also be disposed at other positions of the electronic device 2, which is not limited in this application.
It will be appreciated that in some embodiments, the information collector 12 may comprise one or more sets of components corresponding to the three-dimensional sensing principle(s) employed. For example: a structured light emitter (not shown) and an image sensor (not shown) for the structured-light sensing principle; at least two image sensors (not shown) for the binocular-vision sensing principle; or a light emitter (not shown) and a light receiver (not shown) for the Time-of-Flight (TOF) sensing principle.
Fig. 2 is a schematic diagram of the functional modules 160 of an electronic device 2 provided with the information protection apparatus 1 according to an embodiment of the present application. The electronic device 2 includes a storage medium 22 and a power supply 24. The information collector 12, processor 14, power supply 24, storage medium 22 and control system 16 may be interconnected via a bus to transmit data and signals to each other.
The power supply 24 may provide power for each component of the electronic device 2 by connecting to the mains and performing corresponding adaptation processing. The power supply 24 may also include an energy storage element such as a battery to provide power to the various components.
The storage medium 22 includes, but is not limited to, Flash Memory, Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), and hard disk. The storage medium 22 is used for storing the preset identity feature template of the authorized person, intermediate data generated during identity recognition, computer software code implementing the recognition and control functions, and the like.
Optionally, in some embodiments, the control system 16 includes one or more functional modules 160, which include, but are not limited to, a setting module 161, a depth information obtaining module 162, a face orientation calculating module 163, a visible area calculating module 164, a recognition module 165, a judging module 166 and an information protection module 167. A functional module 160 may be firmware embedded in the storage medium 22 or computer software code stored in it. The functional modules 160 are executed by the corresponding one or more processors 14 to control the relevant components to implement the corresponding functions, such as the information protection function.
It is understood that in other embodiments, corresponding functions of one or more of the functional modules 160 in the control system 16 may be implemented by hardware, for example, any one or combination of the following hardware: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
The setting module 161 is used to preset various parameter information needed in the process of information protection. The parameter information includes, but is not limited to, sensing reference information required for auto-sensing wakeup, a peeping judgment condition according to which peeping judgment is performed, reference data, and an identity feature template required for identity recognition. The parameter information may be stored in the storage medium 22. It is understood that the parameter information may be preset by a manufacturer before the product leaves the factory, or may be set or adjusted by a user during the product use.
As shown in fig. 3, in some embodiments, the reference data includes, but is not limited to, a screen reference direction Z, a distance threshold between the viewer's face and the display screen 3, and an eye-viewable angle threshold a. The screen reference direction Z is a direction perpendicular to the surface of the display screen 3. The information collector 12 is fixed relative to the display screen 3, and the display screen 3 has determined coordinate values in a coordinate system established with the information collector 12 as the reference point, so a vector perpendicular to the surface of the display screen 3 can be defined as the screen reference direction Z. The distance H between the viewer's face and the display screen 3 is the minimum distance between them acquired by the information collector 12. If this distance is smaller than the preset peeping distance threshold, the viewer is considered at risk of peeping. Optionally, the peeping distance threshold may be 20, 30, 45, 60 or 80 centimeters.
A person's eyes can rotate within a certain angular range, so a viewer can observe directions deviating from the face orientation within that range while the head remains still. The eye-viewable angle threshold may therefore be defined as the maximum angle by which the viewer's line of sight E can deviate from the face orientation F. To effectively prevent peeping, the visible region S covered on the plane of the display screen 3 by the viewer's eye-viewable angle range must not overlap the display screen 3 or a local region within it.
Optionally, in some embodiments, the eye-viewable angle threshold may be 45, 50, 60, 70 or 80 degrees. The threshold may take the same or different values in different directions, for example 60 degrees in the horizontal direction of the head and 45 degrees in the vertical direction. A cone-shaped eye-viewable angle range can thus be constructed by setting eye-viewable angle thresholds along different directions of the head, and the area this range covers on the plane of the display screen 3 is the viewer's visible area S under the current face orientation F. If the visible area S overlaps the display screen 3, the viewer can be considered at risk of peeping at the current position and face orientation F; conversely, if the visible area S does not overlap the display screen 3, there is no peeping risk at that position and orientation.
Optionally, in some embodiments, it is not necessary to shield the entire display screen 3 from view; only a local area of the display screen 3 that displays important information may be restricted. In this case, the viewer is considered at risk of peeping when the visible area S overlaps that preset local area. For convenience of explanation, the area of the display screen 3 that must not be peeped at is uniformly called the key area K; it may be the entire display screen 3 or a local area within it that displays important information.
Optionally, in some embodiments, the peeping judgment condition includes a first judgment condition and/or a second judgment condition. The first judgment condition is that the distance H between the viewer's face and the display screen 3 is smaller than the preset peeping distance threshold. The second judgment condition is that the viewer's visible area S overlaps the display screen 3 or a preset local area within it. It is to be appreciated that in some embodiments, the viewer is judged at risk of peeping when either the first or the second judgment condition is satisfied. Alternatively, in some other embodiments, both conditions must be satisfied simultaneously for the viewer to be judged at risk of peeping.
The depth information obtaining module 162 is configured to control the information collector 12 to acquire depth data in the scene and to analyze and process that data to obtain the depth information. Optionally, in some embodiments, the depth information comprises the distance H between the viewer and the display screen 3 and three-dimensional data of the viewer's face. How the depth information is acquired depends on the three-dimensional sensing principle adopted by the information collector 12. For example, in some embodiments the information collector 12 projects a patterned light beam, such as a speckle beam, onto the viewer or the space where the viewer is located, and captures the light pattern the beam forms there. The depth information obtaining module 162 then acquires the depth information of the viewer's face or surrounding space by calculating the distortion between the captured light pattern and a preset reference-plane light pattern. In other embodiments, the information collector 12 uses the TOF principle: it emits light beams toward the viewer or the surrounding space at a specific frequency or in specific time periods and receives the beams reflected back. The depth information obtaining module 162 acquires the depth information by calculating the time elapsed between emission and reception. In still other embodiments, the information collector 12 and the depth information obtaining module 162 may use the binocular-vision sensing principle to acquire the depth of the viewer or the surrounding space.
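The TOF relation mentioned above is simply that depth is half the round-trip distance travelled by the emitted light. A minimal sketch of that arithmetic (the function name is invented; real TOF sensors measure phase shift or pulse timing in hardware):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(round_trip_seconds):
    """Depth from the time between emitting a light pulse and receiving its echo."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 4 ns round trip corresponds to roughly 0.6 m:
print(round(tof_depth(4e-9), 3))  # 0.6
```

The tiny time scales involved (nanoseconds per metre) are why TOF depth sensing needs dedicated timing hardware rather than general-purpose code.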
The face orientation calculating module 163 is configured to calculate the viewer's face orientation F from the acquired three-dimensional data of the face. Optionally, in some embodiments, as shown in fig. 4, the module extracts feature points of the viewer's face, acquires their three-dimensional data, connects the extracted feature points to construct a corresponding facial reference plane, and calculates the normal vector of that plane as the face-orientation vector F. Optionally, the facial feature points may be, but are not limited to, the left eye, the right eye, the tip of the nose and the corners of the mouth; the left/right eye points may also be the corresponding eye corners.
The number of facial feature points connected by the face orientation calculating module 163 is not limited in principle, but the feature points used to construct a facial reference plane must lie on the same plane so that the constructed reference is indeed a plane. Optionally, in some embodiments, the module constructs a facial reference plane by connecting three feature points of the viewer's face. For example, as shown in fig. 4, the left eye, the right eye and the left mouth corner form a facial reference plane, and the viewer's face orientation F is obtained by calculating the normal vector of that plane.
It will be appreciated that the face orientation F will vary slightly depending on which facial feature points are chosen to construct the reference plane. Optionally, in some embodiments, as shown in fig. 5, face orientations F1 and F2 may be calculated from two different facial reference planes, and the direction of their vector sum used as the face orientation F. By analogy, the final face orientation F may also be obtained by summing the normal vectors of several facial reference planes constructed from different sets of facial feature points.
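The vector-summing step above can be sketched as follows; this is a hedged illustration (function name and input vectors invented), assuming the per-plane normals F1, F2 are already unit vectors:

```python
def combine_orientations(f1, f2):
    """Direction of the vector sum of two face-orientation normals."""
    s = [a + b for a, b in zip(f1, f2)]
    length = sum(c * c for c in s) ** 0.5
    return [c / length for c in s]  # renormalized final orientation F

# Two slightly different normals average into an in-between direction:
print(combine_orientations([0.0, 0.0, 1.0], [1.0, 0.0, 0.0]))
```

Summing and renormalizing gives an angular average of the two estimates, smoothing out the dependence on which feature points were chosen.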
The visible area calculating module 164 is configured to calculate the range of the viewer's visible area S on the plane of the display screen 3 from the viewer's face orientation F and the preset eye-viewable angle threshold. Optionally, in some embodiments, as shown in fig. 3, the module takes the face orientation F as the centre and deviates from it by the set eye-viewable angle threshold along each preset direction to construct the viewer's eye-viewable angle range, which may be an essentially cone-shaped three-dimensional structure. The module then obtains the area S this range covers on the plane of the display screen 3, for example by calculating the coordinates of the points on and within the boundary of S, and takes S as the viewer's visible area at the current position and face orientation F. It will be appreciated that these calculations are all performed in a coordinate system established with the information collector 12 as the reference point.
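A simplified sketch of this cone-to-plane computation, under assumed geometry that is not spelled out in the patent: the screen lies in the plane z = 0 of the collector's coordinate system, the view cone has a single half-angle, and the covered area is approximated by a circle around the point where the cone's axis meets the screen plane.

```python
import math

def visible_area_circle(face_pos, face_dir, half_angle_deg):
    """Return (centre_x, centre_y, radius) of the approximate visible circle.

    face_pos: (x, y, z) of the face in screen coordinates (screen at z = 0).
    face_dir: unit face-orientation vector F (must point toward the screen).
    """
    if face_dir[2] >= 0:
        return None  # facing away from the screen: no visible area
    t = -face_pos[2] / face_dir[2]          # ray parameter reaching z = 0
    cx = face_pos[0] + t * face_dir[0]      # axis intersection with the plane
    cy = face_pos[1] + t * face_dir[1]
    radius = t * math.tan(math.radians(half_angle_deg))
    return (cx, cy, radius)

# A viewer 0.5 m in front of the screen, looking straight at it, 60-degree cone:
print(visible_area_circle((0.0, 0.0, 0.5), (0.0, 0.0, -1.0), 60))
```

For an oblique face orientation the true footprint is a conic section rather than a circle, so this circle is only a rough bound; the patent's per-direction angle thresholds would refine it.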
The judging module 166 is configured to judge whether the current viewer poses a peeping risk according to a preset peeping judgment condition. Optionally, if the peeping judgment condition requires only the first judgment condition to be satisfied, namely that the distance between the face of the viewer and the display screen 3 is smaller than a preset peeping distance threshold, the judging module 166 compares the distance between the current viewer's face and the display screen 3 acquired by the information collector 12 with the preset peeping distance threshold, and judges that the current viewer poses a peeping risk when that distance is smaller than the threshold.
Optionally, if the peeping judgment condition requires only the second judgment condition to be satisfied, namely that the visible area S of the viewer overlaps the preset key area K of the display screen 3, the judging module 166 compares the range of the visible area S of the viewer at the current position and face orientation F with the key area K of the display screen 3. If the visible area S overlaps the key area K of the display screen 3, the judging module 166 judges that the current viewer poses a peeping risk; if the visible area S does not overlap the key area K of the display screen 3, the judging module 166 judges that the current viewer poses no peeping risk. It can be understood that the current visible area S of the viewer and the key area K of the display screen 3 may both be marked in the coordinate system established with the information collector 12 as the reference point. The judging module 166 may determine whether the two overlap either according to whether a coordinate point in the current visible area S of the viewer is located in the coordinate range of the key area K of the display screen 3, or according to whether a coordinate point in the key area K of the display screen 3 is located in the coordinate range of the current visible area S of the viewer.
Optionally, if the peeping determination condition is set to satisfy the first determination condition and the second determination condition at the same time, the determination module 166 determines that the current viewer has a risk of peeping when the distance between the face of the current viewer and the display screen 3 is smaller than the preset peeping distance threshold and the visible region S of the current viewer overlaps with the key region K of the display screen 3.
The identification module 165 is configured to identify whether the identity of the current viewer is a preset authorizer when it is determined that the current viewer is at risk of peeping. Optionally, in some embodiments, the identifying module 165 identifies whether the viewer is a preset authorizer by matching and analyzing the difference between the three-dimensional data of the viewer's face obtained by the information collector 12 and a preset authorizer identity feature template.
The information protection module 167 is configured to control the electronic device 2 to perform a corresponding information protection operation when the viewer posing the peeping risk is identified as not being a preset authorized person. Optionally, in some embodiments, the information protection operation includes, but is not limited to, turning off the display screen 3, popping up an anti-peeping text prompt, changing the brightness of the display screen 3, issuing an anti-peeping sound prompt, recording relevant evidence of the viewer posing the peeping risk, automatically raising an alarm, and the like.
Optionally, in some embodiments, as shown in fig. 6, the information protection apparatus 1 may further include an auto-sensor 16. The automatic sensor 16 is configured to obtain environmental information of an environment in which the electronic device 2 is located and/or status information of the electronic device 2. The environment information includes sound information, image information, depth information, and the like. The state information includes acceleration information and the like.
Accordingly, in some embodiments, the control system 16 further includes an auto-induction module 168. The setting module 161 is preset with sensing reference information. The sensing reference information may be pre-stored in the storage medium 22 for triggering a wake-up function of the information protection apparatus 1. The auto-induction module 168 compares the environmental information and/or the state information with the preset sensing reference information to wake up or shut down the information protection apparatus 1. Optionally, the sensing reference information includes an audio feature template, a proximity distance threshold, a face number threshold, an acceleration change threshold, and the like. For example: the auto-induction module 168 may compare the acquired environmental sound information with an audio feature template trained in advance in a specific scene, and automatically wake up or shut down the information protection apparatus 1 when the comparison determines that the electronic device 2 is in the preset specific scene. Alternatively, the auto-induction module 168 analyzes the acquired distance between the viewer and the display screen 3 and automatically wakes up the information protection apparatus 1 when that distance is smaller than a preset proximity distance threshold; optionally, the proximity distance threshold may be 1 meter, 2 meters, 3 meters, or the like. Alternatively, the auto-induction module 168 analyzes the acquired image information in front of the display screen 3 and automatically wakes up the information protection apparatus 1 when the number of faces appearing in the image in front of the display screen 3 exceeds a preset face number threshold; optionally, the face number threshold may be 1, 2, 3, or the like.
Alternatively, according to the sensed acceleration change of the electronic device 2, when the change amplitude exceeds a preset acceleration change threshold, it is determined that the electronic device 2 has been picked up, and the information protection apparatus 1 is automatically woken up.
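The threshold comparisons above can be sketched as a single wake-up decision. This is an illustrative sketch (Python; all parameter names and default threshold values are assumptions, mirroring the example values given above such as the 1 meter proximity threshold and the face number threshold of 1):

```python
def should_wake(distance_m=None, face_count=None, accel_delta=None,
                proximity_threshold_m=1.0, face_count_threshold=1,
                accel_threshold=2.0):
    """Wake the information protection apparatus when any sensed quantity
    crosses its preset sensing reference threshold. A reading of None means
    the corresponding sensor provided no measurement this cycle."""
    if distance_m is not None and distance_m < proximity_threshold_m:
        return True  # viewer approached closer than the proximity threshold
    if face_count is not None and face_count > face_count_threshold:
        return True  # more faces in front of the screen than allowed
    if accel_delta is not None and abs(accel_delta) > accel_threshold:
        return True  # device picked up: acceleration change exceeded threshold
    return False
```

The audio-template comparison would slot in as one more condition, but it requires a trained model and is left out of this sketch.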
Correspondingly, in some embodiments, the auto-sensor 16 includes, but is not limited to, a microphone, an image sensor, a proximity sensor, an acceleration sensor, or a combination thereof. It is understood that the image information used for auto-induction can be obtained by the image sensor in the information collector 12, or by a separately provided image sensor.
It can be understood that the working power consumption of the auto-sensor 16 is significantly lower than that of the information collector 12. By providing the auto-sensor 16 and the corresponding function module 160, the electronic device 2 wakes up the information protection apparatus 1 to collect three-dimensional data through the information collector 12 only when peeping may occur. This prevents the higher-power information collector 12 from working continuously, so that the overall power consumption of the electronic device 2 is reduced as much as possible while the information protection function is still realized.
Referring to fig. 2 and fig. 7, an embodiment of the present application further provides a method for protecting information of an electronic device 2 by using the information protection apparatus 1. The information protection method comprises the following steps:
Step S101, three-dimensional data of the viewer's face is acquired. Optionally, in some embodiments, the processor 14 controls the information collector 12 to obtain three-dimensional data of the face of the viewer in front of the display screen 3 by executing the depth information acquisition module 162. The method for acquiring the depth information is determined by the three-dimensional sensing principle adopted by the information collector 12. For example, in some embodiments, the information collector 12 projects a patterned light beam, such as a speckle beam, onto the viewer or the space where the viewer is located, and captures the light pattern that the patterned beam forms there. The depth information acquisition module 162 acquires the depth information of the viewer's face or of the space where the viewer is located by calculating the distortion between the captured light pattern and a preset reference-plane light pattern. In other embodiments, the information collector 12 uses the TOF (time-of-flight) principle: it emits light beams toward the viewer or the space where the viewer is located at a specific frequency or in specific time periods, and then receives the light beams reflected back. The depth information acquisition module 162 acquires the depth information of the viewer or the space by calculating the time elapsed between emission and reception of the light beam. In still other embodiments, the information collector 12 and the depth information acquisition module 162 may also use the binocular vision sensing principle to acquire the depth of the viewer or of the space where the viewer is located.
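As an illustrative sketch of the TOF calculation mentioned above (Python; the function name and the timestamp values are assumptions), the one-way depth is half the round-trip time multiplied by the speed of light:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(t_emit_s, t_receive_s):
    """One-way distance from the round-trip time of a reflected light pulse:
    d = c * (t_receive - t_emit) / 2."""
    return SPEED_OF_LIGHT * (t_receive_s - t_emit_s) / 2.0
```

For example, a round-trip time of 4 nanoseconds corresponds to a depth of roughly 0.6 meters, which illustrates the timing precision a TOF information collector needs at screen-viewing distances.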
In step S102, the face orientation F of the viewer is calculated. Optionally, in some embodiments, the processor 14 calculates the face orientation F of the viewer according to the acquired three-dimensional data of the face of the viewer by executing the face orientation F calculation module 163. Optionally, in some embodiments, referring to fig. 4, fig. 5 and fig. 8, the step S102 may further include the following sub-steps:
and step S1021, extracting three-dimensional data of the face feature points of the viewer. Optionally, in some embodiments, the processor 14 extracts three-dimensional data of facial feature points from the acquired three-dimensional data of the viewer's face by executing the face orientation F calculation module 163. Alternatively, the facial feature points may be set to, but not limited to, the left eye, the right eye, the tip of the nose, and the corner of the mouth, and the left/right eyes may also be the corresponding corners of the eyes.
In step S1022, the extracted facial feature points are connected to construct a corresponding face reference plane. Optionally, in some embodiments, the processor 14 connects the extracted viewer facial feature points by executing the face orientation F calculation module 163 to construct a corresponding face reference plane. In principle, the number of connected facial feature points is not limited in this step, but the feature points used to construct the face reference plane must all lie on the same plane so that the constructed face reference plane is truly planar. Optionally, in some embodiments, the face reference plane may be constructed by connecting three feature points of the viewer's face. For example, as shown in fig. 4, the left eye, the right eye and the left mouth corner form a face reference plane, and the face orientation F of the viewer can be obtained by calculating the vertical (normal) vector of the face reference plane.
In step S1023, a vertical vector of the face reference plane is calculated as a vector of the face orientation F. Alternatively, in some embodiments, the processor 14 calculates the vertical vector of the face reference plane as the vector of the face orientation F by executing the face orientation F calculation module 163 from the three-dimensional data of the facial feature points constructing the face reference plane.
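Steps S1021 to S1023 amount to taking the cross product of two edge vectors of the feature-point triangle. A minimal sketch follows (Python with NumPy; the feature-point names, the example coordinates, and the winding order that fixes the sign of the normal are all assumptions, not part of the disclosure):

```python
import numpy as np

def face_orientation_from_features(features):
    """S1021: extract three feature points; S1022: treat them as a face
    reference plane; S1023: return the plane's unit vertical vector as the
    face orientation F. The sign of the vector depends on the chosen
    winding order of the three points."""
    left_eye = np.asarray(features["left_eye"], dtype=float)
    right_eye = np.asarray(features["right_eye"], dtype=float)
    left_mouth = np.asarray(features["left_mouth_corner"], dtype=float)
    normal = np.cross(right_eye - left_eye, left_mouth - left_eye)
    return normal / np.linalg.norm(normal)
```

In the camera coordinate system assumed here, a frontal face whose feature points all lie in a plane parallel to the screen yields an orientation along the z axis, pointing toward the information collector.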
It will be appreciated that the face orientation F calculated by choosing different facial feature points to construct the face reference plane will vary slightly. Optionally, in some embodiments, the face orientation F may also be calculated for two different face reference planes respectively, and the direction of the vector obtained by summing the two is taken as the face orientation F. By analogy, the finally determined face orientation F may also be obtained by sequentially summing the vertical vectors of a plurality of face reference planes, each constructed from a different set of facial feature points.
In step S103, a range of a viewing area S of the viewer on the plane of the display screen 3 is calculated according to the face orientation F of the viewer and a preset eye viewing angle threshold. The processor 14 executes the visible region S calculating module 164 to calculate the visible region S range of the viewer on the plane of the display screen 3 according to the obtained face orientation F of the viewer and the preset eye viewing angle threshold. Optionally, in some embodiments, referring to fig. 3 and fig. 9, the step S103 may further include the following sub-steps:
and step S1031, constructing the visual angle range of the eyes of the viewer. Optionally, in some embodiments, the processor 14 executes the visual area S calculation module 164 to center the face orientation F of the viewer and respectively deviate from the face orientation F by the set eye-viewing angle threshold values along the preset direction to construct the eye-viewing angle range of the viewer. Alternatively, the range of angles visible to the viewer's eye may be substantially a three-dimensional cone-shaped structure.
Step S1032 is to calculate an area covered by the eye viewing angle range of the viewer on the plane where the display screen 3 is located, and use the area as the viewing area S of the viewer in the current position and face orientation F. Optionally, in some embodiments, the processor 14 obtains the area covered by the constructed observer eye visibility angle range on the plane of the display screen 3 by executing the visibility area S calculation module 164, for example: coordinate values of coordinate points on and inside the area boundary are calculated, and the area is taken as a visible area S of the viewer at the current position and face orientation F. It will be appreciated that the above calculations of the visible area S are all performed within the coordinate system established by the information collector 12 for the reference point.
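Under simplifying assumptions (the screen lies in the plane z = 0 of the collector's coordinate system, and the cone's section is approximated by a circle rather than the exact conic), the cone-on-plane computation of steps S1031 and S1032 can be sketched as follows; the function name, the coordinate convention, and the circle approximation are all assumptions of this sketch:

```python
import numpy as np

def visible_area_on_screen(viewer_pos, face_dir, half_angle_deg):
    """Intersect the viewer's eye-visible cone (axis = face orientation F,
    half angle = eye-visible angle threshold) with the screen plane z = 0.
    Returns (center_xy, radius) of the approximating circle, or None when
    the gaze does not reach the screen plane."""
    p = np.asarray(viewer_pos, dtype=float)
    f = np.asarray(face_dir, dtype=float)
    f = f / np.linalg.norm(f)
    if f[2] == 0:
        return None  # gaze parallel to the screen plane: no intersection
    t = -p[2] / f[2]  # ray parameter where the gaze axis meets z = 0
    if t <= 0:
        return None  # viewer is facing away from the screen
    center = p + t * f
    radius = t * np.tan(np.radians(half_angle_deg))
    return center[:2], radius
```

A viewer one meter from the screen looking straight at it with a 45 degree half angle covers a circle of roughly one meter radius, which can then be tested against the key area K.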
And step S104, judging whether the current viewer has peeping risks or not according to preset peeping judgment conditions. Optionally, in some embodiments, the processor 14 executes the determining module 166 to determine whether there is a peep risk in the current viewer according to a preset peep determining condition.
Optionally, in some embodiments, as shown in fig. 10, if the peeping judgment condition requires only the first judgment condition to be satisfied, namely that the distance between the face of the viewer and the display screen 3 is smaller than a preset peeping distance threshold, step S104 includes the following sub-steps:
in step S1041, the distance between the face of the current viewer and the display screen 3 is acquired. Optionally, in some embodiments, processor 14 executes determination module 166 to obtain the distance between the current viewer's face and display screen 3 via information collector 12.
Step S1042, comparing the acquired distance between the current viewer' S face and the display screen 3 with a preset peeping distance threshold. Optionally, in some embodiments, processor 14 executes decision module 166 to compare the distance between the current viewer's face acquired by information collector 12 and display screen 3 to a preset peep distance threshold.
And S1043, judging that the current viewer has the risk of peeping when the distance between the face of the viewer and the display screen 3 is smaller than a preset peeping distance threshold value. And when the distance between the face of the viewer and the display screen 3 is larger than a preset peeping distance threshold value, judging that the current viewer has no peeping risk.
Optionally, in some embodiments, as shown in fig. 11, if the peeping judgment condition requires only the second judgment condition to be satisfied, namely that the visible area S of the current viewer overlaps a preset key area of the display screen 3, step S104 includes the following sub-steps:
Step S1044, compare the current viewer's visible area S with the preset key area of the display screen 3. The processor 14 executes the judging module 166 to perform a range comparison between the visible area S of the viewer at the current position and face orientation F and the key area of the display screen 3. It can be understood that the current visible area S of the viewer and the key area of the display screen 3 may both be marked in the coordinate system established with the information collector 12 as the reference point. Whether the two overlap may be determined according to whether a coordinate point in the current visible area S of the viewer is located in the coordinate range of the key area of the display screen 3, or according to whether a coordinate point in the key area of the display screen 3 is located in the coordinate range of the current visible area S of the viewer.
Step S1045, if the visible area S overlaps with the key area of the display screen 3, determining that the viewer is at risk of peeping. If the visible area S does not overlap with the key area of the display screen 3, the determining module 166 determines that the viewer is not at present at the risk of peeping.
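The coordinate-range comparison of steps S1044 and S1045 reduces, for rectangular regions, to a standard interval-overlap test. A sketch under that assumption (the rectangle representation and function name are illustrative; the patent does not restrict the regions to rectangles):

```python
def regions_overlap(region_s, region_k):
    """Axis-aligned overlap test between the visible area S and the key
    area K, each given as (x_min, y_min, x_max, y_max) in the coordinate
    system whose reference point is the information collector."""
    sx0, sy0, sx1, sy1 = region_s
    kx0, ky0, kx1, ky1 = region_k
    # The rectangles overlap exactly when they overlap on both axes.
    return sx0 < kx1 and kx0 < sx1 and sy0 < ky1 and ky0 < sy1
```

Equivalently, one could sample coordinate points of one region and test membership in the other, as the text describes; the interval test is just the closed-form version for rectangles.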
Optionally, in some embodiments, as shown in fig. 12, if the peeping determination condition is set to require that the first determination condition and the second determination condition are satisfied at the same time, the step S104 includes the following sub-steps:
in step S1046, the distance between the face of the current viewer and the display screen 3 is acquired. Processor 14 executes decision module 166 to obtain the distance between the current viewer's face and display screen 3 via information collector 12.
Step S1047, comparing the acquired distance between the current viewer' S face and the display screen 3 with a preset peeping distance threshold. Processor 14 executes decision module 166 to compare the distance between the current viewer's face acquired by information collector 12 and display screen 3 to a preset peep distance threshold.
Step S1048, when the distance between the current viewer 'S face and the display screen 3 is smaller than a preset peeping distance threshold, comparing the obtained viewer' S visible region S with a preset key region of the display screen 3.
Step S1049, if the distance between the face of the current viewer and the display screen 3 is smaller than the preset distance threshold and the visible area S of the current viewer overlaps with the key area of the display screen 3, determining that the current viewer is at risk of peeping. And if the distance between the face of the current viewer and the display screen 3 is larger than a preset distance threshold value or the visible area S of the viewer is not overlapped with the key area of the display screen 3, judging that the current viewer has no peeping risk.
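The three optional forms of the peeping judgment condition (first condition only, second condition only, or both) can be sketched as one decision function; the mode names and the 1 meter default threshold are assumptions of this sketch, not values fixed by the disclosure:

```python
def peeping_risk(distance_m, overlaps_key_area, mode="both",
                 distance_threshold_m=1.0):
    """Step S104 under the three optional judgment conditions.
    mode="distance": risk when the face-to-screen distance is below the
    peeping distance threshold (steps S1041-S1043).
    mode="overlap": risk when the visible area S overlaps the key area
    (steps S1044-S1045).
    mode="both": risk only when both conditions hold (steps S1046-S1049)."""
    near = distance_m < distance_threshold_m
    if mode == "distance":
        return near
    if mode == "overlap":
        return overlaps_key_area
    return near and overlaps_key_area
```

Under mode="both", a distant viewer whose visible area happens to cover the key area is still judged risk-free, matching step S1049.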
Step S105, if the current viewer is judged to have the risk of peeping, whether the identity of the viewer is a preset authorizer is identified.
Optionally, in some embodiments, processor 14 identifies whether the viewer is a predetermined authorizer by executing identification module 165 to match and analyze the differences between the three-dimensional data of the viewer's face obtained by information collector 12 and a predetermined authorizer identity template.
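One way to read "matching and analyzing the differences" against the identity template is a point-wise comparison of aligned three-dimensional face data. The following is only a hedged sketch of that reading (Python with NumPy; the assumption that the captured data is already aligned with the template, and the tolerance value, are both illustrative and not specified by the disclosure):

```python
import numpy as np

def is_authorized(face_points, template_points, tolerance=1.0):
    """Accept the viewer as the preset authorizer when the mean squared
    difference between the captured face point cloud and the stored
    identity feature template falls below a tolerance. Assumes both
    arrays hold corresponding points in the same coordinate frame."""
    diff = np.asarray(face_points, dtype=float) - np.asarray(template_points, dtype=float)
    return bool(np.mean(diff ** 2) < tolerance)
```

A practical identification module would first register the two point sets and likely use learned features rather than raw coordinates; this sketch only illustrates the difference-below-threshold decision.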
Step S106, if the viewer posing the peeping risk is identified as not being a preset authorized person, the electronic device 2 is controlled to execute the corresponding information protection operation. Optionally, in some embodiments, the processor 14 implements the corresponding information protection operation by executing the information protection module 167. The information protection operation includes, but is not limited to, turning off the display screen 3, popping up an anti-peeping text prompt, changing the brightness of the display screen 3, issuing an anti-peeping sound prompt, recording relevant evidence of the viewer posing the peeping risk, automatically raising an alarm, and the like.
With the information protection apparatus 1 and the corresponding information protection method described above, the identity of the viewer is recognized only after the collected three-dimensional data of the viewer in front of the display screen 3 indicates a peeping risk, which avoids frequently performing the power-hungry identity recognition based on three-dimensional data. Secondly, when the viewer posing the peeping risk is identified as an unauthorized person, the corresponding information protection operation is performed automatically to prevent important information on the display screen 3 from being peeped at.
Referring to fig. 6 and fig. 13 together, in some other embodiments, the information protection method provided by the present application further includes step S100 executed before step S101:
Step S100, start steps S101 to S106 of the information protection method according to the environment information of the environment where the electronic device 2 is located and/or the state information of the electronic device 2. The processor 14 executes the auto-induction module 168 to start or stop the information protection method according to the environment information and the state information of the electronic device 2 obtained by the auto-sensor 16. The environment information includes sound information, image information, depth information, and the like. The state information includes acceleration information and the like.
Specifically, the environmental information and/or the status information of the electronic device 2 are acquired through the automatic sensor 16, and the acquired environmental information and/or status information are compared with preset sensing reference information. The processor 14 compares the acquired environment information and/or state information with preset sensing reference information by executing the auto-sensing module 168, and wakes up or shuts down the information protection apparatus 1 according to the comparison result.
Optionally, the sensing reference information includes an audio feature template, a proximity distance threshold, a face number threshold, an acceleration change threshold, and the like. For example: the auto-induction module 168 may compare the acquired environmental sound information with an audio feature template trained in advance in a specific scene, and automatically wake up or shut down the information protection apparatus 1 when the comparison determines that the electronic device 2 is in the preset specific scene. Alternatively, the auto-induction module 168 analyzes the acquired distance between the viewer and the display screen 3 and automatically wakes up the information protection apparatus 1 when that distance is smaller than a preset proximity distance threshold; optionally, the proximity distance threshold may be 1 meter, 2 meters, 3 meters, or the like. Alternatively, the auto-induction module 168 analyzes the acquired image information in front of the display screen 3 and automatically wakes up the information protection apparatus 1 when the number of faces appearing in the image in front of the display screen 3 exceeds a preset face number threshold; optionally, the face number threshold may be 1, 2, 3, or the like. Alternatively, according to the sensed acceleration change of the electronic device 2, when the change amplitude exceeds a preset acceleration change threshold, it is determined that the electronic device 2 has been picked up, and the information protection apparatus 1 is automatically woken up.
It will be appreciated that the operational power consumption of the auto-sensor 16 is significantly less than the operational power consumption of the information collector 12. According to the information protection method, the information protection device 1 is awakened to collect the three-dimensional data through the information collector 12 when the peeping condition is sensed to possibly occur, so that the information collector 12 with high power consumption is prevented from working for a long time, the information protection function can be realized, and the overall power consumption of the electronic equipment 2 is reduced as much as possible.
In the description herein, references to the description of the terms "one embodiment," "certain embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a system containing a processing module, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer storage medium.
It should be understood that portions of embodiments of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in the storage medium 22 and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. An information protection method for protecting information displayed on a display screen from being peeped by an unauthorized person, the information protection method comprising the steps of:
acquiring three-dimensional data of the face of a viewer;
judging whether the viewer has peeping risk according to the acquired three-dimensional data of the face of the viewer;
after judging that the viewer has the peeping risk, identifying whether the viewer is a preset authorizer by matching the difference between the three-dimensional face data of the viewer and a preset authorizer identity characteristic template; and
performing an information protection operation upon identifying that the viewer at risk of peeping is an unauthorized person.
2. The information protection method according to claim 1, wherein if the distance between the face of the viewer and the display screen is smaller than a preset peeping distance threshold, it is determined that the viewer has a peeping risk; or
If the visible area of the viewer is overlapped with the preset key area of the display screen, judging that the viewer has peeping risk; or
If the distance between the face of the viewer and the display screen is smaller than a preset peeping distance threshold and the visible area of the viewer overlaps a preset key area of the display screen, it is judged that the viewer is at risk of peeping.
3. The information protection method according to claim 2, characterized in that: the method for calculating the visual area of the viewer comprises the following steps:
taking the face orientation of the viewer as a center, deviating the set eye visual angle threshold value from the face orientation along a preset direction to construct an eye visual angle range of the viewer; and
and calculating the area covered by the eye-visible angle range of the viewer on the plane of the display screen as the visible area of the viewer at the current position and face orientation.
4. The information protection method according to claim 3, characterized in that: the method of calculating the orientation of the viewer's face comprises the steps of:
extracting three-dimensional data of the face feature points of the viewer;
connecting the extracted facial feature points to construct a corresponding facial reference plane; and
and calculating a vertical vector of the face reference plane as the vector of the face orientation.
5. The information protection method according to claim 1, characterized in that: the method for acquiring the three-dimensional face data of the viewer is based on one or more of a structural light sensing principle, a flight time sensing principle and a binocular vision sensing principle.
6. The information protection method according to claim 1, wherein the information protection operation comprises one or more of: turning off the display screen, displaying an anti-peeping text prompt, changing the brightness of the display screen, issuing an anti-peeping audio prompt, recording evidence related to the peeping risk, and automatically raising an alarm.
7. The information protection method according to any one of claims 1 to 6, further comprising, before acquiring the three-dimensional data of the viewer's face, the following steps:
acquiring environment information of the environment in which the display screen is located and/or state information of the display screen; and
executing the information protection method according to the result of comparing the acquired environment information and/or state information with preset sensing reference information.
8. The information protection method according to claim 7, wherein the sensing reference information comprises an audio feature template, a proximity distance threshold, a face-count threshold, and an acceleration-change threshold; the acquired environmental sound information is compared with an audio feature template trained in advance in a specific scene, and the information protection method is executed when the comparison determines that the display screen is in the preset specific scene;
or, the distance between a viewer and the display screen is acquired, and the information protection method is executed when that distance is less than the preset proximity distance threshold;
or, image information in front of the display screen is acquired, and the information protection method is executed when the number of faces appearing in the image exceeds the preset face-count threshold;
or, a sensor automatically acquires the acceleration change of the display screen, and the information protection method is executed when the sensed acceleration-change amplitude exceeds the preset acceleration-change threshold.
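The four alternative triggers of claims 7 and 8 can be sketched as a single gating predicate over measured values and the preset sensing reference information. All dictionary field names, units, and threshold values below are illustrative assumptions; a missing measurement simply leaves its trigger unfired:

```python
def should_run_protection(env, ref):
    """Claim 8 sketch: decide whether to execute the information
    protection method, given measured environment/state values (env)
    and preset sensing reference information (ref)."""
    # Trigger 1: environmental sound matched the pre-trained audio
    # feature template for a specific scene (matching done elsewhere).
    if env.get("scene_matches_audio_template"):
        return True
    # Trigger 2: a viewer is closer than the proximity distance threshold.
    d = env.get("viewer_distance_mm")
    if d is not None and d < ref["proximity_threshold_mm"]:
        return True
    # Trigger 3: more faces in front of the screen than the face-count
    # threshold allows.
    faces = env.get("face_count")
    if faces is not None and faces > ref["face_count_threshold"]:
        return True
    # Trigger 4: the screen's acceleration changed more sharply than the
    # acceleration-change threshold (e.g. the device was snatched or moved).
    accel = env.get("acceleration_change")
    if accel is not None and accel > ref["acceleration_change_threshold"]:
        return True
    return False
```

Only when this gate fires would the (more expensive) three-dimensional face acquisition of claim 1 begin.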
9. The information protection method according to claim 3, wherein the eye viewing-angle threshold may be 45 degrees, 50 degrees, 60 degrees, 70 degrees, or 80 degrees.
10. A storage medium storing program code executable by one or more processors, wherein the program code, when executed by the processor(s), implements the information protection method of any one of claims 1 to 9.
CN202011633542.4A 2020-12-31 2020-12-31 Information protection method and storage medium Pending CN112632510A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011633542.4A CN112632510A (en) 2020-12-31 2020-12-31 Information protection method and storage medium


Publications (1)

Publication Number Publication Date
CN112632510A true CN112632510A (en) 2021-04-09

Family

ID=75289895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011633542.4A Pending CN112632510A (en) 2020-12-31 2020-12-31 Information protection method and storage medium

Country Status (1)

Country Link
CN (1) CN112632510A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108307031A (en) * 2017-08-08 2018-07-20 腾讯科技(深圳)有限公司 Screen processing method, apparatus and storage medium
CN109002736A (en) * 2018-06-28 2018-12-14 深圳市口袋网络科技有限公司 A kind of method, apparatus and computer readable storage medium of peep-proof screen
CN109033901A (en) * 2018-08-01 2018-12-18 平安科技(深圳)有限公司 Glance prevention method, device, computer equipment and the storage medium of intelligent terminal
CN109543473A (en) * 2018-11-13 2019-03-29 Oppo(重庆)智能科技有限公司 Glance prevention method, device, terminal and the storage medium of terminal
CN110955912A (en) * 2019-10-29 2020-04-03 平安科技(深圳)有限公司 Privacy protection method, device and equipment based on image recognition and storage medium thereof


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294925A (en) * 2022-09-16 2022-11-04 浙江亿洲电子科技有限公司 LED display screen control system for privacy protection according to environment detection
CN116523241A (en) * 2023-05-05 2023-08-01 江苏鑫翊翔智能化工程有限公司 Digital service management system and method based on artificial intelligence
CN116523241B (en) * 2023-05-05 2023-11-24 中清软(北京)科技有限公司 Digital service management system and method based on artificial intelligence

Similar Documents

Publication Publication Date Title
EP3872689B1 (en) Liveness detection method and device, electronic apparatus, storage medium and related system using the liveness detection method
CN108563936B (en) Task execution method, terminal device and computer-readable storage medium
WO2020135096A1 (en) Method and device for determining operation based on facial expression groups, and electronic device
KR20200062284A (en) Vehicle and vehicle door unlock control method, device and vehicle door unlock system
CN108446638B (en) Identity authentication method and device, storage medium and electronic equipment
CN102129554B (en) Method for controlling password input based on eye-gaze tracking
US11721087B2 (en) Living body detection method and apparatus, electronic device, storage medium, and related system to which living body detection method is applied
CN105718863A (en) Living-person face detection method, device and system
CN109948586B (en) Face verification method, device, equipment and storage medium
CN112784323B (en) Information protection device and electronic equipment
WO2020024416A1 (en) Anti-peep method and apparatus for smart terminal, computer device and storage medium
CN108875468B (en) Living body detection method, living body detection system, and storage medium
CN112632510A (en) Information protection method and storage medium
US11620860B2 (en) Spoofing detection apparatus, spoofing detection method, and computer-readable recording medium
CN212160784U (en) Identity recognition device and entrance guard equipment
KR102495796B1 (en) A method for biometric authenticating using a plurality of camera with different field of view and an electronic apparatus thereof
CN109525837B (en) Image generation method and mobile terminal
CN111368811A (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN111429599A (en) Attendance machine based on face recognition and body temperature measurement and attendance detection method
CN108629278B (en) System and method for realizing information safety display based on depth camera
CN104796539A (en) Terminal state control method
CN111695509A (en) Identity authentication method, identity authentication device, machine readable medium and equipment
CN103745199A (en) Risk prevention financial self-help acceptance device and method on basis of face recognition technology
CN111223219A (en) Identity recognition method and storage medium
CN111063085A (en) Identity recognition device and entrance guard equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210409