WO2021035575A1 - Method and electronic device for preventing peeping during a payment process - Google Patents

Method and electronic device for preventing peeping during a payment process

Info

Publication number
WO2021035575A1
WO2021035575A1 · PCT/CN2019/103066 · CN2019103066W
Authority
WO
WIPO (PCT)
Prior art keywords
payment
pupils
payment interface
electronic device
pupil
Prior art date
Application number
PCT/CN2019/103066
Other languages
English (en)
French (fr)
Inventor
艾静雅
柳彤
朱大卫
汤慧秀
Original Assignee
深圳海付移通科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳海付移通科技有限公司
Priority to PCT/CN2019/103066 priority Critical patent/WO2021035575A1/zh
Priority to CN201980010281.XA priority patent/CN111801700B/zh
Publication of WO2021035575A1 publication Critical patent/WO2021035575A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/38 Payment protocols; Details thereof
    • G06Q 20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q 20/401 Transaction verification
    • G06Q 20/4014 Identity check for transactions
    • G06Q 20/40145 Biometric identity checks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24317 Piecewise classification, i.e. whereby each classification requires several discriminant rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/82 Protecting input, output or interconnection devices
    • G06F 21/84 Protecting input, output or interconnection devices; output devices, e.g. displays or monitors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components; by matching or filtering
    • G06V 10/446 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components; by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/197 Matching; Classification

Definitions

  • This application relates to the field of mobile payment technology, and in particular to a method and electronic equipment for preventing peeping during the payment process.
  • NFC (Near Field Communication): near field communication module
  • the relevant technology has at least the following problems: when users make payments in crowded places with complex payment environments, such as shopping malls and stores, strangers can easily peep, so the user's property is not well protected.
  • embodiments of the present application provide a method and an electronic device that prevent strangers from peeping during the payment process and improve payment security.
  • a method for preventing peeping during a payment process, applied to an electronic device, characterized in that the method includes:
  • the camera is enabled for pupil detection and eye tracking during the payment process
  • determining the number of people gazing at the payment interface of the electronic device according to the number of pupils and the positions of the pupils in the sensing range includes:
  • if the distance is within the threshold range, determining whether the pupils at the two ends of the distance focus on the payment interface; if so, increasing the number of people watching the payment interface by one.
  • the determining the number of persons gazing at the payment interface of the electronic device according to the number of pupils and the positions of the pupils in the sensing range further includes:
  • if the distance between one pupil and the remaining pupils in the sensing range of the camera is not within the threshold range, determining whether that pupil is focused on the payment interface; if so, increasing the number of people watching the payment interface by one.
  • the electronic device displays the payment interface through a display screen, a film material for realizing a directional light source is arranged outside the display screen, and the film material divides the sensing range into a visible area and an interference area;
  • the method also includes:
  • the content of the payment interface is rearranged so that the image is clearly visible when the payment interface is gazed at from the visible area, and image crosstalk occurs when it is gazed at from the interference area.
  • the occurrence of image crosstalk when watching the payment interface in the interference area includes:
  • the pixel arrangement of the payment interface is changed, and the images entering the left eye and the right eye are directed to different positions, so that image crosstalk appears on the payment interface.
  • the visible area extends outward from the display screen, and the interference area surrounds the visible area.
  • the film material is a columnar prism film.
  • the suspension of the payment process includes:
  • the face recognition function, fingerprint input function, or password input function on the electronic device is disabled.
  • an electronic device includes: at least one processor; and
  • the electronic device can be used to implement the method for preventing peeping during the payment process as described above.
  • the non-transitory computer-readable storage medium includes: the non-transitory computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are used to make a computer execute the method as described above.
  • the method for preventing peeping during the payment process can first enable the camera to perform pupil detection and eye tracking during the payment process, then extract the number of pupils and the pupil positions within the sensing range of the camera, then determine the number of people gazing at the payment interface of the electronic device according to the number of pupils and the pupil positions in the sensing range, and then, if the number of people gazing at the payment interface is determined to be two or more, prompt the user and/or suspend the payment process. In this way, when the user is in a crowded place with a complex payment environment, such as a shopping mall or a store, strangers are prevented from peeping, the security of the payment operation is improved, and the user's property safety is guaranteed.
  • Figure 1 is a schematic diagram of an application environment of an embodiment of the application
  • FIG. 2 is a schematic flowchart of a method for preventing peeping during a payment process provided by an embodiment of the application
  • FIG. 3 is a schematic flowchart of one embodiment of S30 in FIG. 2;
  • FIG. 4 is a schematic flowchart of another embodiment of S30 in FIG. 2;
  • FIG. 5 is a structural block diagram of a peeping prevention device provided by an embodiment of the application.
  • Fig. 6 is a structural block diagram of an electronic device provided by an embodiment of the application.
  • the embodiment of the present application provides a method for preventing peeping during the payment process, which is applied to an electronic device.
  • the method can first enable a camera during the payment process to perform pupil detection and eye tracking, then extract the number of pupils and the pupil positions within the sensing range of the camera, then determine the number of people gazing at the payment interface of the electronic device according to the number of pupils and the pupil positions in the sensing range, and then, if the number of people gazing at the payment interface is determined to be two or more, prompt the user and/or suspend the payment process, so that when the user is in a crowded place with a complex payment environment, such as a shopping mall or a store, strangers are prevented from peeping, the security of the payment operation is improved, and the user's property is well protected.
  • the following examples illustrate the application environment of the method for preventing peeping in the payment process.
  • FIG. 1 is a schematic diagram of an application environment of a method for preventing peeping during a payment process provided by an embodiment of the present application; as shown in FIG. 1, the application scenario includes an electronic device 10, a wireless network 20, a payment terminal 30 and a user 40.
  • the user 40 can operate the electronic device 10 through the wireless network 20 to interact with the payment terminal 30 to complete the corresponding payment operation.
  • the electronic device 10 may be of any type, such as a smart phone, a smart bracelet, a smart watch, or a tablet computer.
  • the electronic device 10 may be equipped with one or more different user 40 interaction devices to collect instructions from the user 40 or display and feedback information to the user 40.
  • these interaction devices include, but are not limited to: buttons, display screens, touch screens, speakers, and remote-control joysticks.
  • the electronic device 10 may be equipped with a touch-sensitive display screen, and the user 40 may use the touch-sensitive display screen to perform payment execution operations such as fingerprint input and password input.
  • the electronic device 10 may be equipped with a camera device, and the user 40 can use the camera device to perform payment operations such as face recognition and lip language recognition.
  • the camera device further includes a camera, and the camera can be used for pupil detection and eye tracking.
  • the wireless network 20 may be a wireless communication network based on any type of data transmission principle and used to establish a data transmission channel between two nodes, such as a Bluetooth network, a WiFi network, a wireless cellular network, or a combination thereof located in different signal frequency bands.
  • the payment terminal 30 can be a shopping mall self-service charging device, a bank self-service charging system, etc.
  • the payment terminal 30 has a display screen, and the user 40 can use the electronic device 10 to scan the payment QR code displayed on the display screen to implement payment operations.
  • Fig. 2 shows an embodiment of the method for preventing peeping during a payment process provided by an embodiment of the application. As shown in Fig. 2, the method includes the following steps:
  • the payment process refers to the user's payment execution operations through fingerprint input, password input, and face recognition.
  • the eye tracking technology refers to the tracking of eye movement by measuring the position of the eye's gaze point or the movement of the eye relative to the head.
  • the image information in front of the display screen can be acquired through a face image acquisition device.
  • pupil detection is performed on the image information through the Haar feature classification and recognition algorithm.
  • the Haar feature classification and recognition algorithm uses sum or difference thresholds over rectangular image areas to extract Haar-like wavelet features and generate corresponding weak classifiers, introduces the AdaBoost algorithm to select the optimal weak classifiers and train them into a strong classifier, and finally generates a multi-layer cascaded strong classifier by filtering and cascading several strong classifiers.
  • face and eye classifiers using the Haar feature classification and recognition algorithm have the characteristics of fast recognition speed, a high target recognition rate, and a low non-target false acceptance rate.
  • S20 Extract the number of pupils and pupil positions within the sensing range of the camera.
  • the sensing range of the camera is the monitoring range or shooting range of the camera.
  • the sensing range of the camera can be calculated according to the image sensor model of the camera, the focal length of the camera, and the viewing angle range of the camera.
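  • As an illustration of that calculation, the standard pinhole-camera angle-of-view relation can be sketched as below; the sensor width and focal length in the usage note are illustrative assumptions, not values from the application.

```python
import math

def field_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view of a pinhole camera: 2 * atan(w / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def sensing_width_at(distance_mm, fov_deg):
    """Width of the scene covered by the camera at a given distance."""
    return 2 * distance_mm * math.tan(math.radians(fov_deg) / 2)
```

For a hypothetical front camera with a 4.8 mm-wide sensor and a 3.5 mm focal length, the horizontal field of view is about 69 degrees, and the sensing range at 500 mm in front of the screen is about 686 mm wide.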
  • the eye tracking calibration data can be obtained through the above-mentioned eye tracking technology, and the number and positions of the pupils can be obtained according to the eye tracking calibration data.
  • the eye-tracking calibration data is data in a preset eye-movement calibration model, and the eye-tracking calibration data can be calibrated in real time based on electronic device state information and pupil state information.
  • the terminal status information can be detected by a built-in sensor of the electronic device or the application software of the electronic device 10.
  • the pupil state information includes, but is not limited to: the number of pupils, pupil position, fixation point, fixation time, fixation times, saccade distance or pupil size.
  • the face image can be acquired in real time by the face acquisition device of the electronic device 10, the eye image is detected through the face image recognition model, and the eye image is then processed by gray-value conversion, edge detection, and the like, in order to obtain the number and positions of the pupils.
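  • A minimal sketch of that gray-value processing step, under the simplifying assumption that the pupil is the darkest blob in the eye image (a real pipeline would add edge processing and outlier rejection):

```python
def pupil_centroid(gray, threshold=50):
    """Locate a pupil as the centroid of dark pixels in a grayscale eye image.

    `gray` is a 2-D list of 0-255 intensity values; pixels darker than
    `threshold` are treated as pupil pixels.
    """
    xs, ys, n = 0, 0, 0
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if v < threshold:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None  # no pupil pixels found in this image
    return (xs / n, ys / n)
```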
  • the face acquisition device is provided in the electronic device 10 as a front camera of the electronic device 10 for acquiring image information in front of the display screen.
  • the face image acquisition device includes an image acquisition unit, a first communication bus, a sensor unit, a second communication bus, a control processing unit, and a face detection and recognition unit.
  • the image acquisition unit is used to acquire an image of the target object in the target area.
  • the target area refers to the image acquisition area of the image acquisition unit, or the face recognition area used for face acquisition/recognition.
  • the target object is, for example, a person whose face recognition is to be performed.
  • the image acquisition unit may be a camera, various image acquisition terminals, or a camera unit or camera of an electronic device such as a tablet.
  • the image acquisition unit is connected to the control processing unit through the first communication bus, so as to send the acquired image to the control processing unit for processing, or to accept commands or signals from the control processing unit.
  • the image acquisition unit may be a digital camera.
  • the digital camera includes a device body, a base, a support rod, a telescopic rod, a supplementary aperture, a housing, a detachment board, a signal line, a USB port, and a protective ring. A base is provided at the bottom of the device body, a support rod is provided above the base, a telescopic rod is provided above the support rod, and a housing is provided above the telescopic rod. A supplementary aperture is provided on the right side of the housing and is rotatably connected to the housing. A detachment plate is provided on the left side of the housing, the left middle position of the detachment plate is connected to the USB port through a signal line, and the right side of the signal line is connected to a chip inside the device body, the right side of which is connected to a signal converter.
  • a protective ring is fixedly arranged inside the supplementary aperture; the material of the protective ring is high-strength pressure-resistant glass, which facilitates the supplementary light of the supplementary aperture, improves the performance of the device body, prevents wear of the CMOS lens, and improves the service life of the device body.
  • the pupil position may be the coordinates of the pupil in a coordinate system, or the angle and distance from a preset reference point.
  • the camera device of the electronic device 10 obtains the image information in front of the display screen within the sensing range, and the number of pupils in the image information and the position information corresponding to each pupil can be obtained from the image information. If there are multiple pupils, the distance between every two pupils and the position coordinates of each pupil in the image can be calculated, or the distance and angle of each pupil from an image reference point.
  • S30 Determine the number of people who are gazing at the payment interface of the electronic device according to the number of pupils and the position of the pupils in the sensing range.
  • the payment interface refers to all interfaces that appear on the electronic device during the payment process of the user through fingerprint input, password input, and face recognition.
  • the number of pupils in the sensing range of the camera can be used to determine the number of people appearing in the sensing range, and the positions of the pupils can further be used to identify, among those people, the ones gazing at the payment interface of the electronic device.
  • the payment information of the user 40 may be stolen; warning information is then generated to remind the user 40 to pay attention to the surrounding situation.
  • the warning information may be "There is a security risk in continuing to pay, please pay attention to the surrounding conditions", etc.; alternatively, the relevant payment function of the electronic device 10 is disabled and the payment process is suspended.
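  • The prompt-and/or-suspend decision can be sketched as follows; the return structure is an illustrative assumption, with the warning text taken from the paragraph above:

```python
def security_action(watcher_count):
    """Decide the anti-peeping response from the number of people gazing
    at the payment interface: two or more watchers triggers a warning
    and suspends the payment process."""
    if watcher_count >= 2:
        return {
            "warning": "There is a security risk in continuing to pay, "
                       "please pay attention to the surrounding conditions",
            # face recognition, fingerprint input and password input disabled
            "payment_suspended": True,
        }
    return {"warning": None, "payment_suspended": False}
```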
  • the embodiment of the application provides a method for preventing peeping during the payment process.
  • the method may first enable a camera during the payment process to perform pupil detection and eye tracking, then extract the number of pupils and the pupil positions within the sensing range of the camera, then determine the number of people gazing at the payment interface of the electronic device according to the number of pupils and the pupil positions in the sensing range, and then, if the number of people gazing at the payment interface is determined to be two or more, prompt the user and/or suspend the payment process, so that when the user is in a crowded place with a complex payment environment, such as a shopping mall or a store, strangers are prevented from peeping, the security of the payment operation is improved, and the user's property safety is guaranteed.
  • S30 includes the following steps:
  • pupil detection is performed on the image information through the Haar feature classification and recognition algorithm, and the position of each pupil within the sensing range is obtained. Then, the distance between each pair of pupils is calculated according to the pupil positions.
  • the Haar feature classification and recognition algorithm uses sum or difference thresholds over rectangular image areas to extract Haar-like wavelet features and generate corresponding weak classifiers; the AdaBoost algorithm is introduced to select the optimal weak classifiers and train them into a strong classifier, and a multi-layer cascaded strong classifier is finally generated by filtering and cascading several strong classifiers.
  • face and eye classifiers using the Haar feature classification and recognition algorithm have the characteristics of fast recognition speed, a high target recognition rate, and a low non-target false acceptance rate.
  • the threshold range is 55 mm-65 mm; if the distance (spacing) between two pupils is 58 mm, it is determined that the distance between the two pupils is within the threshold range.
  • the payment interface refers to all interfaces that appear on the electronic device during the payment process of the user through fingerprint input, password input, and face recognition.
  • to judge whether the pupils at the two ends of a distance are focused on the payment interface, a USB camera can first collect dynamic image information in real time as input, the image information of each frame in the video is extracted, the positions of the face and the eyes in the frame image information are detected and located using the Haar-feature-based classification and recognition learning algorithm, the region of interest of the human eye is extracted, the coordinates of the outer eye corner are located on this region using an improved SUSAN corner detection algorithm, the motion trajectory of the human pupil is tracked in real time through target tracking technology, and a line-of-sight direction calculation model is constructed using the pupil center position and the eye-corner coordinates as line-of-sight feature parameters. The focus area of the pupils at the two ends of the distance is thereby determined, and whether those pupils focus on the payment interface is then judged from their focus area.
  • using the Haar-feature-based classification and recognition learning algorithm to detect the positions of the face and the eyes in the frame image information and extract the focus area of the pupil includes the following steps:
  • step (1): the specific steps of performing human eye detection on the image information are as follows:
  • V = R1 - R2 (1)
  • V is the calculated value of the rectangular feature in the image area
  • R1 and R2 represent the white area feature and the black area feature in the rectangular feature, respectively
  • the calculation formula of Ri is: Ri = Σ r(m,n), where r(m,n) is the value of a pixel in the white area or the black area, and m and n represent the vertical and horizontal coordinates within the rectangular feature area, respectively;
  • E(A), E(B), E(C), E(D) respectively represent the summed-area (integral) image values at the upper-left point, upper-right point, lower-left point, and lower-right point of the requested area;
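  • These rectangle features are conventionally evaluated over a summed-area (integral) image, where the sum of any rectangle follows from its four corner values E(A), E(B), E(C), E(D). A sketch consistent with formula (1):

```python
def integral_image(img):
    """Summed-area table: E[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    E = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            E[y + 1][x + 1] = E[y][x + 1] + row_sum
    return E

def region_sum(E, top, left, bottom, right):
    """Sum of pixels in [top, bottom) x [left, right) from the four
    corner values: E(D) - E(B) - E(C) + E(A)."""
    return (E[bottom][right] - E[top][right]
            - E[bottom][left] + E[top][left])

def haar_feature(E, white, black):
    """V = R1 - R2: white-region sum minus black-region sum, formula (1).
    `white` and `black` are (top, left, bottom, right) rectangles."""
    return region_sum(E, *white) - region_sum(E, *black)
```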
  • the spatial information of the user 40 is determined by the positions of the head support and the camera, and the specific steps include the following:
  • the camera device is fixed beside the display screen of the mobile terminal 10 so that it can clearly capture the face of the user 40.
  • the camera device is fixed above the display screen of the mobile terminal 10;
  • the user 40 keeps his head still, and only turns his eyes to observe randomly generated points on the computer display screen, and records the coordinates of each point and the coordinates of the pupil.
  • the specific steps of calculating the area of interest of the user 40 include the following:
  • the matching formula is:
  • S(x,y) is the coordinates of a point on the display screen
  • P(x,y) is the pupil coordinates
  • Sw is the width of the computer display screen
  • Sl and Sr are the pupil coordinates recorded when the user 40 observes the leftmost and rightmost points of the display screen.
  • the screen coordinates corresponding to the pupil are calculated from formula (13) and the coordinate data recorded in step (2.4), that is, the value of S(x,y) is obtained.
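  • The matching formula itself is not reproduced in the text, so the sketch below assumes a simple linear interpolation between the recorded calibration extremes; the application's formula (13) may differ:

```python
def pupil_to_screen_x(p_x, s_l, s_r, s_w):
    """Map a pupil x-coordinate P(x) to a screen x-coordinate S(x).

    Assumption: linear interpolation between s_l and s_r, the pupil
    x-coordinates recorded when the user looked at the left and right
    edges of the screen; s_w is the screen width in pixels.
    """
    return (p_x - s_l) / (s_r - s_l) * s_w
```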
  • S30 further includes the following steps:
  • the threshold range is 55mm-65mm
  • the distances between one pupil and the other two pupils are 40 mm and 70 mm
  • the payment interface refers to all interfaces that appear on the electronic device during the payment process of the user through fingerprint input, password input, and face recognition.
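  • The counting logic of S30 across both branches (pupil pairs within the threshold range, and lone pupils with no partner in range) can be sketched as below; the greedy pairing and the abstract gaze predicate are simplifying assumptions:

```python
import math
from itertools import combinations

IPD_MIN, IPD_MAX = 55.0, 65.0  # interpupillary-distance threshold range, mm

def count_watchers(pupils, focuses_on_interface):
    """Count people gazing at the payment interface from pupil positions.

    pupils: list of (x, y) positions in millimetres.
    focuses_on_interface: predicate over a tuple of pupil indices that
    reports whether those pupils focus on the payment interface (the
    gaze-estimation step is abstracted away here).
    """
    paired = set()
    watchers = 0
    # branch 1: two pupils whose spacing lies in the threshold range
    # are treated as belonging to one person
    for i, j in combinations(range(len(pupils)), 2):
        if i in paired or j in paired:
            continue
        if IPD_MIN <= math.dist(pupils[i], pupils[j]) <= IPD_MAX:
            paired.update((i, j))
            if focuses_on_interface((i, j)):
                watchers += 1
    # branch 2: a pupil with no partner within the threshold range
    # still counts as one person if it focuses on the interface
    for i in range(len(pupils)):
        if i not in paired and focuses_on_interface((i,)):
            watchers += 1
    return watchers
```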
  • when the user is making a payment, in order to avoid peeping by strangers and better protect the user's property security, after corresponding security operations are performed according to the payment environment information, the method further includes:
  • the content of the payment interface is rearranged so that the image is clearly visible when the payment interface is gazed at from the visible area, and image crosstalk occurs when it is gazed at from the interference area.
  • the electronic device displays the payment interface through a display screen, and a film material for realizing a directional light source is arranged outside the display screen, and the film material divides the sensing range into a visible area and an interference area.
  • the visible area extends outward from the display screen, and the interference area surrounds the visible area.
  • the images entering the left eye and the right eye are directed to different positions, so that image crosstalk occurs in the payment interface; thus the image is clearly visible when the payment interface is watched from the visible area, and the image is blurred or invisible when the payment interface is watched from the interference area.
  • the display screen is a display screen with self-luminous display units, such as an OLED display screen or a micro-LED display screen.
  • the display screen has a certain viewing angle.
  • when the electronic device 10 is observed from the direction perpendicular to the display plane (the visible area), the brightness is relatively high; but from an angle deviating from the normal (the interference area), the prism film rearranges the image of the payment interface so that the image in the interference area forms dark bars and some areas produce ghost images, resulting in severe field loss and distortion and making the image invisible in the interference area.
  • the film material is a columnar prism film.
  • the columnar prism film includes a first light-transmitting substrate and a plurality of prism structures.
  • the prism structures are disposed on the surface of the first light-transmitting substrate away from the first diffusion layer, and the prism structures receive the image beam from the projector and guide the image beam along the first direction.
  • the material of the first light-transmitting substrate may be polyethylene terephthalate (PET) or another transparent material. Diffusing particles can be arranged in each prism structure to diffuse the image beam.
  • the prism structures of this embodiment are, for example, triangular prisms; the prism structures are arranged parallel to each other along the second direction, and each prism structure extends along the third direction, that is, the long axis of each prism structure is parallel to the third direction.
  • the first direction, the second direction, and the third direction are, for example, perpendicular to each other.
  • the present application does not limit the specific shape and arrangement of the prism structure.
  • the prism structure is, for example, a Fresnel lens structure distributed in concentric circles.
  • the embodiments of the present application provide a peeping prevention device 50.
  • the peeping prevention device 50 includes: a pupil tracking module 51, a pupil information extraction module 52, a number calculation module 53 and a secure payment module 54.
  • the pupil tracking module 51 is used to enable a camera to perform pupil detection and eye tracking during the payment process.
  • the pupil information extraction module 52 is used to extract the number of pupils and pupil positions within the sensing range of the camera.
  • the number calculation module 53 is configured to determine the number of people gazing at the payment interface of the electronic device according to the number of pupils and the positions of the pupils in the sensing range.
  • the safe payment module 54 is used for prompting and/or suspending the payment process if it is determined that the number of people watching the payment interface is two or more.
  • the peeping prevention device 50 first activates the camera during the payment process to perform pupil detection and eye tracking, then extracts the number of pupils and the pupil positions in the sensing range of the camera, then determines the number of people watching the payment interface of the electronic device according to the number of pupils and the pupil positions in the sensing range, and then, if the number of people watching the payment interface is determined to be two or more, prompts the user and/or suspends the payment process, so that when the user is in a crowded place with a complex payment environment, such as a shopping mall or a store, strangers are prevented from peeping, the security of payment operations is improved, and the safety of the user's property is ensured.
  • the number calculation module 53 includes an interpupillary distance calculation unit and a number judgment unit;
  • the interpupillary distance calculation unit is used to calculate the distance between each pair of pupils if two or more pupils are extracted within the sensing range of the camera.
  • the number determining unit is configured to determine, if the distance is within a threshold range, whether the pupils at both ends of that distance are focused on the payment interface; if so, the number of people watching the payment interface is increased by one.
  • the number determining unit is further configured to determine, if the distance between one of the pupils in the sensing range of the camera and every remaining pupil is outside the threshold range, whether that pupil is focused on the payment interface; if so, the number of people watching the payment interface is increased by one.
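As a minimal sketch of the pairing rule implemented by the number calculation module 53 (function names and the 55-65 mm threshold below are assumptions for illustration, not values from the application):

```python
# Hypothetical sketch: group detected pupils into viewers by interpupillary
# distance, then count the viewers whose pupils focus on the payment interface.
from itertools import combinations
from math import dist

IPD_RANGE = (55.0, 65.0)  # assumed adult interpupillary-distance range, in mm

def count_viewers(pupils, focused):
    """Count people gazing at the payment interface.

    pupils  : list of (x, y) pupil positions within the camera's sensing range.
    focused : parallel list of booleans, True if that pupil is focused on the
              payment interface (as decided by the gaze-direction model).
    Two pupils whose mutual distance lies inside IPD_RANGE are paired into one
    viewer; a pupil that pairs with no other is counted on its own when it is
    focused on the interface.
    """
    n = len(pupils)
    paired = [False] * n
    viewers = 0
    for i, j in combinations(range(n), 2):
        if paired[i] or paired[j]:
            continue
        if IPD_RANGE[0] <= dist(pupils[i], pupils[j]) <= IPD_RANGE[1]:
            paired[i] = paired[j] = True
            if focused[i] and focused[j]:   # both pupils of the pair gaze at it
                viewers += 1
    for i in range(n):
        if not paired[i] and focused[i]:    # lone (unpaired) gazing pupil
            viewers += 1
    return viewers
```

With two pupil pairs both focused on the interface this returns 2, which is the condition under which module 54 would suspend the payment.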
  • the peeping prevention device 50 further includes an image rearrangement module, which is used to rearrange the content of the payment interface so that the image is clearly visible when the payment interface is watched from the visible area, while image crosstalk occurs when it is watched from the interference area.
  • by changing the pixel arrangement of the payment interface, the images entering the left eye and the right eye are directed to different positions, so that image crosstalk occurs in the payment interface.
  • the electronic device displays the payment interface through a display screen, and a film material for realizing a directional light source is arranged outside the display screen, and the film material divides the sensing range into a visible area and an interference area.
  • the visible area extends outward from the display screen, and the interference area surrounds the visible area.
  • the film material is a columnar prism film.
  • the above-mentioned peeping prevention device can execute the method for peeping prevention in the payment process provided in the embodiments of the present application, and has the corresponding functional modules and beneficial effects of the execution method.
  • for technical details not described exhaustively in the device embodiment, refer to the method for peeping prevention in the payment process provided in the embodiments of the present application.
  • FIG. 6 is a structural block diagram of an electronic device 100 provided by an embodiment of the application.
  • the electronic device 100 can be used to realize the functions of all or part of the functional modules in the main control chip.
  • the electronic device 100 may include: a prism film, a processor 110, a memory 120, and a communication module 130.
  • the prism film includes a first light-transmitting substrate and a plurality of prism structures.
  • the prism structures are disposed on the surface of the first light-transmitting substrate away from the first diffusion layer; they receive the image light beam from the projector and guide it along the first direction.
  • the material of the first transparent substrate may be polyethylene terephthalate (PET) or other transparent materials. Diffusing particles can be arranged in each prism structure to diffuse the image beam.
  • the prism structures of this embodiment are, for example, triangular prisms; they are arranged parallel to each other along the second direction, and each prism structure extends along the third direction, that is, the long axis of each prism structure is parallel to the third direction.
  • the first direction, the second direction, and the third direction are, for example, perpendicular to each other.
  • the present application does not limit the specific shape and arrangement of the prism structure.
  • the prism structure is, for example, a Fresnel lens structure distributed in concentric circles.
  • the processor 110, the memory 120, and the communication module 130 establish a communication connection between any two through a bus.
  • the processor 110 may be of any type, with one or more processing cores. It can perform single-threaded or multi-threaded operations and is used to parse instructions in order to fetch data, execute logical operations, and issue the results of those operations.
  • the memory 120, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for preventing peeping during the payment process in the embodiments of the present application (for example, the pupil tracking module 51, the pupil information extraction module 52, the number calculation module 53, and the secure payment module 54 shown in FIG. 5).
  • the processor 110 executes the various functional applications and data processing of the peeping prevention device 50 by running the non-transitory software programs, instructions, and modules stored in the memory 120, thereby implementing the method for preventing peeping during the payment process in any of the foregoing method embodiments.
  • the memory 120 may include a storage program area and a storage data area.
  • the storage program area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of the peeping prevention device 50 and the like.
  • the memory 120 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
  • the memory 120 may optionally include memories remotely provided with respect to the processor 110, and these remote memories may be connected to the electronic device 100 via a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the memory 120 stores instructions executable by the at least one processor 110; the at least one processor 110 is used to execute the instructions to implement the method for preventing peeping during the payment process in any of the foregoing method embodiments, for example, to execute the method steps 10, 20, 30, and 40 described above and realize the functions of modules 51-54 in FIG. 5.
  • the communication module 130 is a functional module used to establish a communication connection and provide a physical channel.
  • the communication module 130 may be any type of wireless or wired communication module 130, including but not limited to a WiFi module or a Bluetooth module.
  • the embodiments of the present application also provide a non-transitory computer-readable storage medium that stores computer-executable instructions. When these instructions are executed by one or more processors 110 (for example, by one of the processors 110 in FIG. 6), the one or more processors 110 may execute the method for preventing peeping during the payment process in any of the foregoing method embodiments, for example, execute the method steps 10, 20, 30, and 40 described above and realize the functions of modules 51-54 in FIG. 5.
  • the device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each implementation manner can be implemented by means of software plus a general hardware platform, and of course, it can also be implemented by hardware.
  • a person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by a computer program in a computer program product instructing the relevant hardware.
  • the computer program can be stored in a non-transitory computer-readable storage medium.
  • the computer program includes program instructions, and when the program instructions are executed by a related device, the related device can execute the flow of the foregoing method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • the above products can implement the method for preventing peeping during the payment process provided by the embodiments of the present application, and have the corresponding functional modules and beneficial effects for executing the method for preventing peeping during the payment process.
  • for technical details not described exhaustively in this embodiment, refer to the method for preventing peeping in the payment process provided in the embodiments of this application.

Abstract

一种支付过程中防止偷窥的方法及电子设备(100),其中应用于电子设备(100)的支付过程中防止偷窥的方法包括:首先在支付过程中启用摄像头进行瞳孔检测与眼球追踪(S10),进而提取摄像头的感测范围内瞳孔数量与瞳孔位置(S20),然后根据感测范围内瞳孔数量与瞳孔位置确定注视电子设备的支付界面的人数(S30),进而若确定出注视支付界面的人数为两人或以上进行提示和/或中止支付过程(S40),从而当用户(40)在商场,门店等人员众多,支付环境复杂的环境下,防止陌生人偷窥,提高了支付操作的安全性,使用户的财产安全得到很好的保障。

Description

支付过程中防止偷窥的方法及电子设备 技术领域
本申请涉及移动支付技术领域,尤其涉及一种支付过程中防止偷窥的方法及电子设备。
背景技术
随着电子设备技术的不断发展,智能手机、智能手环、智能手表等电子设备被加入了各种各样的便捷功能,移动支付就是其中一项较为实用的功能应用。在没有显示屏的电子设备上,一般采用NFC(NearField Communication,近场通信模块)来实现支付操作,而在设置有显示屏的电子设备上,不仅可以通过NFC来实现,还可以通过扫描或者显示支付二维码的方式来实现支付操作。
在实现本申请的过程中,申请人发现相关技术至少存在以下问题:当用户在商场,门店等人员众多,支付环境复杂的环境下进行支付时,容易造成陌生人偷窥,用户的财产安全得不到很好的保障。
发明内容
为了解决上述技术问题,本申请实施例提供一种在用户在支付过程中防止陌生人偷窥,提高支付安全性的支付过程中防止偷窥的方法及电子设备。
为解决上述技术问题,本申请实施例提供以下技术方案:一种支付过程中防止偷窥的方法,应用于电子设备,其特征在于,所述方法包括:
支付过程中启用摄像头进行瞳孔检测与眼球追踪;
提取所述摄像头的感测范围内瞳孔数量与瞳孔位置;
根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数;
若确定出注视所述支付界面的人数为两人或以上进行提示和/或中止所述支付过程。
可选地,所述根据所述感测范围内所述瞳孔数量与所述瞳孔位置确
定注视所述电子设备的支付界面的人数,包括:
若提取出所述摄像头的感测范围内所述瞳孔数量在两个或以上,计
算各个瞳孔间的间距;
若所述间距在阈值范围内,判断所述间距两端的瞳孔是否聚焦所述支付界面;若是,则注视所述支付界面的人数增加1人。
可选地,所述根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数,还包括:
若所述摄像头的感测范围内的其中一个瞳孔与其余瞳孔的间距均不在所述阈值范围内,判断所述其中一个瞳孔是否聚焦于所述支付界面;若是,则注视所述支付界面的人数增加1人。
可选地,所述电子设备通过显示屏显示所述支付界面,所述显示屏外设置有实现指向性光源的膜材,所述膜材将所述感测范围分为可视区域和干扰区域;所述方法还包括:
对所述支付界面的内容进行重排,使得在所述可视区域注视所述支付界面时图像清晰可见,在所述干扰区域注视所述支付界面时出现图像串扰。
可选地,所述在所述干扰区域注视所述支付界面时出现图像串扰,包括:
更改所述支付界面的像素排布,将进入左眼和右眼的图像指向不同的位置,以使所述支付界面出现图像串扰。
可选地,所述可视区域由所述显示屏向外延伸,所述干扰区域环绕所述可视区域。
可选地,所述膜材为柱状棱镜膜。
可选地,所述中止所述支付过程,包括:
将所述电子设备上的人脸识别功能、指纹输入功能或密码输入功能作失效处理。
为解决上述技术问题,本申请实施例还提供以下技术方案:一种电子设备。所述电子设备包括:至少一个处理器;以及
与所述至少一个处理器通信连接的存储器;其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够用于执行如上所述的支付过程中防止偷窥的方法。
为解决上述技术问题,本申请实施例还提供以下技术方案:一种非暂态计算机可读存储介质。所述非暂态计算机可读存储介质包括:所述非暂态计算机可读存储介质存储有计算机可执行指令,所述计算机可执行指令用于使计算机执行如上所述的方法。
与现有技术相比较,本申请实施例的提供支付过程中防止偷窥的方法可以通过首先在支付过程中启用摄像头进行瞳孔检测与眼球追踪,进而提取所述摄像头的感测范围内瞳孔数量与瞳孔位置,然后根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数,进而若确定出注视所述支付界面的人数为两人或以上进行提示和/或中止所述支付过程,从而当用户在商场,门店等人员众多,支付环境复杂的环境下,防止陌生人偷窥,提高了支付操作的安全性,使用户的财产安全得到很好的保障。
附图说明
一个或多个实施例通过与之对应的附图中的图片进行示例性说明,这些示例性说明并不构成对实施例的限定,附图中具有相同参考数字标号的元件表示为类似的元件,除非有特别申明,附图中的图不构成比例限制。
图1为本申请实施例的应用环境示意图;
图2为本申请实施例提供的支付过程中防止偷窥的方法的流程示意图;
图3是图2中S30其中一实施例的流程示意图;
图4是图2中S30另一实施例的流程示意图;
图5为本申请实施例提供的防止偷窥装置的结构框图;
图6为本申请实施例提供的电子设备的结构框图。
具体实施方式
为了便于理解本申请,下面结合附图和具体实施例,对本申请进行更详细的说明。需要说明的是,当元件被表述“固定于”另一个元件,它可以直接在另一个元件上、或者其间可以存在一个或多个居中的元件。当一个元件被表述“连接”另一个元件,它可以是直接连接到另一个元件、或者其间可以存在一个或多个居中的元件。本说明书所使用的术语“上”、“下”、“内”、“外”、“底部”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本申请和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本申请的限制。此外,术语“第一”、“第二”“第三”等仅用于描述目的,而不能理解为指示或暗示相对重要性。
除非另有定义,本说明书所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本说明书中在本申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是用于限制本申请。本说明书所使用的术语“和/或”包括一个或多个相关的所列项目的任意的和所有的组合。
此外,下面所描述的本申请不同实施例中所涉及的技术特征只要彼此之间未构成冲突就可以相互结合。
本申请实施例提供了一种支付过程中防止偷窥的方法,应用于电子设备,所述方法可以通过首先在支付过程中启用摄像头进行瞳孔检测与眼球追踪,进而提取所述摄像头的感测范围内瞳孔数量与瞳孔位置,然后根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数,进而若确定出注视所述支付界面的人数为两人或以上进行提示和/或中止所述支付过程,从而当用户在商场,门店等人员众多,支付环境复杂的环境下,防止陌生人偷窥,提高了支付操作的安全性,使用户的财产安全得到很好的保障。
以下举例说明所述支付过程中防止偷窥的方法的应用环境。
图1是本申请实施例提供的支付过程中防止偷窥的方法的应用环境的示意图;如图1所示,所述应用场景包括电子设备10、无线网络20、付费终端30及用户40。用户40可通过无线网络20操作所述电子设备10与付费终端30进行交互,以完成相应的支付操作。
所述电子设备10可以是任何类型,例如智能手机、智能手环、智能手表或者平板电脑等。该电子设备10可以装配有一种或者多种不同的用户40交互装置,用以采集用户40指令或者向用户40展示和反馈信息。
这些交互装置包括但不限于:按键、显示屏、触摸屏、扬声器以及遥控操作杆。例如,电子设备10可以装配有触控显示屏,用户40可利用所述触控显示屏进行指纹输入及密码输入等支付执行操作。又例如,电子设备10可以装配有摄像装置,用户40可利用所述摄像装置进行人脸识别及唇语识别等支付操作。同时,在本实施例中,所述摄像装置还包括摄像头,所述摄像头可以用于进行瞳孔检测与眼球追踪。
所述无线网络20可以是基于任何类型的数据传输原理,用于建立两个节点之间的数据传输信道的无线通信网络,例如位于不同信号频段的蓝牙网络、WiFi网络、无线蜂窝网络或者其结合。
所述付费终端30可为商场自助收费装置及银行自助收费系统等等，所述付费终端30具有显示屏，用户40可以使用电子设备10通过扫描所述显示屏显示的支付二维码的方式来实现支付操作。在没有所述显示屏的付费终端30上，可以采用NFC(Near Field Communication,近场通信模块)来实现支付操作。
图2为本申请实施例提供的支付过程中防止偷窥的方法的实施例。如图2所示,该支付过程中防止偷窥的方法包括如下步骤:
S10、支付过程中启用摄像头进行瞳孔检测与眼球追踪。
其中,所述支付过程是指用户通过指纹输入、密码输入及人脸识别等支付执行操作。
具体地,所述眼球追踪技术是指通过测量眼睛的注视点的位置或者眼球相对头部的运动而实现对眼球运动的追踪。
具体的，可通过人脸图像采集装置获取所述显示屏前方的图像信息，在获取到所述图像信息之后，进一步地，通过Harr特征分类识别算法对所述图像信息进行瞳孔检测。所述Harr特征分类识别算法利用对矩形图像区域的和或差阈值化，提取类Harr小波特征生成对应的弱分类器，引入Adaboost算法选择最优的弱分类器，并将其优化组合训练成强分类器，通过将若干个强分类器进行筛选式级联，最终生成多层级联强分类器。采用Harr特征分类识别算法的人脸和人眼分类器具有识别速度快，目标识别率高和非目标错误接受率低等特点。
S20、提取所述摄像头的感测范围内瞳孔数量与瞳孔位置。
其中,所述摄像头的感测范围即为所述摄像头监控范围或拍摄范围,具体地,可通过根据所述摄像头的图形传感器型号、摄像头焦距及摄像头的视角范围,计算得到所述摄像头的感测范围。
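As an illustration of the sensing-range computation described above, the horizontal field of view and the sensed width at a given distance can be derived from the sensor width and focal length under a pinhole-camera model (the sensor width, focal length, and distance below are assumed example values, not parameters from the application):

```python
# Illustrative sketch: camera sensing range from sensor width and focal length.
import math

def field_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a pinhole-model camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def sensing_width_at(distance_mm, sensor_width_mm, focal_length_mm):
    """Width of the sensed area at distance_mm in front of the lens."""
    half_angle = math.atan(sensor_width_mm / (2 * focal_length_mm))
    return 2 * distance_mm * math.tan(half_angle)

fov = field_of_view_deg(4.8, 3.0)        # e.g. a small phone sensor, 3 mm lens
width = sensing_width_at(500, 4.8, 3.0)  # sensed width 0.5 m in front of it
```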
具体地,可通过上述眼球追踪技术获取眼动跟踪校准数据,根据眼动跟踪校准数据获取瞳孔的数量与位置。其中,眼动跟踪校准数据为预设的眼动校准模型中的数据,所述眼动跟踪校准数据可以实时根据电子设备状态信息和瞳孔状态信息进行校准。终端状态信息可以通过电子设备内置传感器或者电子设备10的应用软件进行检测得到。其中,瞳孔状态信息包括但不局限于:瞳孔数量、瞳孔位置、注视点,注视时间、注视次数、眼跳距离或瞳孔大小。
具体地,在一些实施例中,可通过电子设备10的人脸采集装置实时获取人脸图像,通过人脸图像识别模型检测到眼睛图像,再对眼睛图像进行灰度值、边缘化等处理,从而得到瞳孔的数量与位置。
所述人脸采集装置设置于电子设备10，作为所述电子设备10的前置摄像头，用于获取所述显示屏前方的图像信息，所述人脸图像采集装置包括图像采集单元、第一通信总线、传感器单元、第二通信总线、控制处理单元和人脸检测识别单元。图像采集单元用于采集目标区域内目标对象的图像。所述目标区域指的是图像采集单元的图像采集区域，或者用于进行人脸采集/识别的人脸识别区域。所述目标对象例如为待进行人脸识别的人。图像采集单元可以为摄像机、各种图像采集端，或者诸如平板等电子设备的相机单元或摄像头。图像采集单元通过第一通信总线与控制处理单元连接，以将所采集的图像发送至控制处理单元进行处理，或者接收来自控制处理单元的命令或信号。
图像采集单元可为数字摄像头，具体地，所述数字摄像头包括装置本体、底座、支撑杆、伸缩杆、补光圈、外壳、拆卸板、信号线、USB端口和保护圈，所述装置本体的底部设有底座，所述底座的上部位置设有支撑杆，所述支撑杆的上部位置设有伸缩杆，所述伸缩杆的上部位置设有外壳，所述外壳的右侧位置设有补光圈，所述补光圈与外壳之间的连接方式为转动连接，所述外壳的左侧位置设有拆卸板，所述拆卸板的左侧中部位置通过信号线与USB端口连接，所述信号线的右侧与装置本体内部的芯片连接，所述芯片的右侧与信号转换器连接，所述信号转换器的右侧设有驱动装置。所述补光圈的内部设有保护圈，所述保护圈与补光圈之间的连接方式为固定连接，所述保护圈的材质为高强度抗压玻璃，方便了补光圈的补光，提高了装置本体的使用性能，同时防止了CMOS镜头的磨损，提高了装置本体的使用寿命。
其中，瞳孔位置可为所述瞳孔在坐标系中的瞳孔位置坐标，也可为与预设基准点的角度和距离该基准点的长度。例如，电子设备10的摄像装置在感测范围内获取到所述显示屏前方的图像信息，进而可根据所述图像信息得到所述图像信息上的瞳孔数量和每个瞳孔对应的瞳孔位置信息。若瞳孔数量有多个，即可计算出每两个瞳孔之间的间距和每个瞳孔在所述图像信息中的位置坐标或每个瞳孔距离图像基准点的距离及每个瞳孔与基准点的角度。
S30、根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数。
其中,所述支付界面是指用户在通过指纹输入、密码输入及人脸识别等支付过程中电子设备上出现的所有界面。
具体地，可根据所述摄像头感测范围内的瞳孔数量确定感测范围内出现的人数，进一步根据所述瞳孔位置从在感测范围内出现的人中筛选出注视所述电子设备的支付界面的人数。
S40、若确定出注视所述支付界面的人数为两人或以上进行提示和/或中止所述支付过程。
可以理解的是,若确定出注视所述支付界面的人数为两人或以上,则表明当前用户40的支付环境不安全,继续进行支付操作可能存在潜在的危险,用户40的支付信息可能会被窃取,进而生成警示信息,用于提醒用户40注意周围情况。例如,所述警示信息可为“继续支付存在安全风险,请注意查看周围情况”等,或将电子设备10的相关支付功能作失效处理,进而中止所述支付过程。例如,将所述移动终端10的摄像装置锁定,使之无法开启摄像装置,进而无法使用人脸识别功能,或将移动终端10上的指纹输入区域进行锁定,使之无法感应用户40的指纹输入操作,或将移动终端10上的密码输入区域进行锁定,使用户40无法正常输入密码。
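The prompt-and-invalidate handling of step S40 can be sketched as follows (the class and method names are illustrative assumptions, not from the application; the warning string is the one quoted in the description above):

```python
# Minimal sketch of step S40: if two or more people gaze at the payment
# interface, warn the user and invalidate the payment entry points.
class PaymentGuard:
    WARNING = "继续支付存在安全风险，请注意查看周围情况"  # warning text from the description

    def __init__(self):
        self.face_id_enabled = True
        self.fingerprint_enabled = True
        self.password_enabled = True
        self.last_warning = None

    def on_gaze_count(self, viewers):
        """Called with the viewer count produced by step S30."""
        if viewers >= 2:
            self.last_warning = self.WARNING
            # Invalidate every payment entry point, suspending the payment.
            self.face_id_enabled = False
            self.fingerprint_enabled = False
            self.password_enabled = False
            return False   # payment must not proceed
        return True        # a single viewer: payment may continue

guard = PaymentGuard()
```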
本申请实施例提供了一种支付过程中防止偷窥的方法,所述方法可以通过首先在支付过程中启用摄像头进行瞳孔检测与眼球追踪,进而提取所述摄像头的感测范围内瞳孔数量与瞳孔位置,然后根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数,进而若确定出注视所述支付界面的人数为两人或以上进行提示和/或中止所述支付过程,从而当用户在商场,门店等人员众多,支付环境复杂的环境下,防止陌生人偷窥,提高了支付操作的安全性,使用户的财产安全得到很好的保障。
为了更好的根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数,在一些实施例中,请参阅图3,S30包括如下步骤:
S31:若提取出所述摄像头的感测范围内的所述瞳孔数量在两个或以上,计算各个瞳孔间的间距。
具体地,通过Harr特征分类识别算法对所述图像信息进行瞳孔检测,进而得到在感测范围内的各个瞳孔位置。进而根据所述瞳孔位置计算出各个瞳孔间的间距。
具体地，所述Harr特征分类识别算法利用对矩形图像区域的和或差阈值化，提取类Harr小波特征生成对应的弱分类器，引入Adaboost算法选择最优的弱分类器，并将其优化组合训练成强分类器，通过将若干个强分类器进行筛选式级联，最终生成多层级联强分类器。采用Harr特征分类识别算法的人脸和人眼分类器具有识别速度快，目标识别率高和非目标错误接受率低等特点。
S32:若所述间距在阈值范围内,判断所述间距两端的瞳孔是否聚焦于所述支付界面;若是,则注视所述支付界面的人数增加1人。
具体地，例如，当所述瞳孔数量为2个，阈值范围为55mm-65mm，若两个所述瞳孔间的距离(间距)为58mm，则确定两个所述瞳孔的间距在所述阈值范围内。
其中,所述支付界面是指用户在通过指纹输入、密码输入及人脸识别等支付过程中电子设备上出现的所有界面。
具体地，所述判断所述间距两端的瞳孔是否聚焦于所述支付界面可首先利用USB摄像头实时采集动态图像信息作为输入，提取视频中每一帧图像信息，利用基于Harr特征的分类识别学习算法检测定位帧图像信息中人脸和人眼位置，提取人眼感兴趣区域，并在该区域上采用改进的SUSAN角点检测算法定位人眼外眼角坐标，同时通过目标跟踪技术实时跟踪人眼瞳孔运动轨迹，利用瞳孔中心位置和人眼角点坐标作为视线特征参数构建视线方向计算模型，确定所述间距两端的瞳孔的聚焦区域，进而根据所述瞳孔的聚焦区域判断所述间距两端的瞳孔是否聚焦于所述支付界面。
具体地，所述利用基于Harr特征的分类识别学习算法检测定位帧图像信息中人脸和人眼位置，提取瞳孔的聚焦区域包括如下步骤：
(1)利用Haar分类器对实时图像进行人眼检测;
(2)获取用户所处的位置信息;
(3)利用人眼检测结果和用户所处的位置信息计算用户视觉注意力区域。
所述步骤(1)中,对图像信息进行人眼检测的具体步骤如下:
(1.1)计算人眼部位的矩形特征:
V=R1-R2   (1)
其中,V为矩形特征在该图像区域的计算值,R1和R2分别代表矩形特征中白色区域特征和黑色区域特征,其中Ri的计算公式为:
R_i = Σ_{(m,n)∈R_i} r(m,n), i∈{1,2}   (2)
其中,r(m,n)为白色区域或黑色区域中像素点的值,m和n分别代表矩形特征区域的竖坐标和横坐标;
(1.2)采用和面积表来计算矩形特征图的值，其中对于整幅图像中的某一块区域，和面积表的计算公式为：
SABCD=E(A)+E(C)-E(B)-E(D)   (3)
其中,E(A)、E(B)、E(C)、E(D)分别代表所求区域左上点、右上点、左下点、右下点的和面积表;
(1.3)在眼睛矩形特征的基础上生成10-20个弱分类器,然后使用AdaBoost将其级联为强分类器,具体步骤包括如下:
先从初始训练数据集训练出一个基本分类器;再根据基本分类器的性能对训练样本分部进行调整,增加上一轮迭代时被误分类样本的权重;基于调整后的样本权重训练下一个分类器;重复上述步骤,直到分类器数量达到预先设置的数量。
设训练数据集T={(x_1,y_1),(x_2,y_2),...,(x_N,y_N)}；最终得到的强分类器记为G(x)。
初始化训练数据集权值分布:
D_1 = (w_{1,1}, ..., w_{1,i}, ..., w_{1,N}), w_{1,i} = 1/N, i = 1,2,...,N      (4)
对于迭代次数m=1,2,···,M;使用具有权值分布Dm的训练数据集学习,得到基本分类器:
G_m(x): χ → {-1,+1}   (5)
计算Gm(x)在训练数据集上的分类误差率:
e_m = P(G_m(x_i)≠y_i) = Σ_{i=1}^{N} w_{m,i}·I(G_m(x_i)≠y_i)   (6)
计算Gm(x)的系数
a_m = (1/2)·ln((1-e_m)/e_m)   (7)
更新训练数据集的权值分布
D_{m+1} = (w_{m+1,1}, ..., w_{m+1,i}, ..., w_{m+1,N})   (8)
w_{m+1,i} = (w_{m,i}/Z_m)·exp(-a_m·y_i·G_m(x_i)), i = 1,2,...,N    (9)
Zm是规范化因子
构建基本分类器的线性组合
f(x) = Σ_{m=1}^{M} a_m·G_m(x)   (10)
得到最终强分类器
G(x) = sign(f(x)) = sign(Σ_{m=1}^{M} a_m·G_m(x))   (11)
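The training loop of equations (4)-(11) can be illustrated with a compact, pure-Python AdaBoost in which one-dimensional threshold "stumps" stand in for the Haar-feature weak classifiers (the data and the stump form are illustrative only, not the application's actual features):

```python
# Pure-Python AdaBoost over threshold stumps, following equations (4)-(11).
import math

def train_adaboost(xs, ys, rounds=5):
    """xs: list of floats; ys: labels in {-1,+1}. Returns [(alpha, thr, sign)]."""
    n = len(xs)
    w = [1.0 / n] * n                                 # eq. (4): uniform weights
    ensemble = []
    for _ in range(rounds):
        # pick the weighted-error-minimising stump h(x) = sign * sgn(x - thr)
        best = None
        for thr in sorted(set(xs)):
            for sign in (+1, -1):
                preds = [sign * (1 if x > thr else -1) for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)  # eq. (6)
                if best is None or err < best[0]:
                    best = (err, thr, sign, preds)
        err, thr, sign, preds = best
        err = max(err, 1e-10)                         # avoid log of zero
        alpha = 0.5 * math.log((1 - err) / err)       # eq. (7)
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        z = sum(w)                                    # normalising factor Z_m
        w = [wi / z for wi in w]                      # eq. (9)
        ensemble.append((alpha, thr, sign))
    return ensemble

def predict(ensemble, x):
    f = sum(a * s * (1 if x > t else -1) for a, t, s in ensemble)  # eq. (10)
    return 1 if f >= 0 else -1                                     # eq. (11)
```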
所述步骤(2)中,通过头部支架和摄像头的位置确定用户40所处的空间信息,其具体步骤包括如下:
(2.1)将摄像装置固定于移动终端10的显示屏旁边可以清晰拍摄用户40面部的位置,本实施例中将摄像装置固定于移动终端10的显示屏的上方;
(2.2)用头部支架将用户40的头部固定好，使得用户40的头部与摄像装置的距离固定，并记录下该距离；
(2.3)在移动终端10的显示屏上随机产生若干个点；
(2.4)用户40的头部保持不动，只转动眼睛去观察显示屏上随机产生的点，记录每个点的坐标和瞳孔的坐标。
所述步骤(3)中,计算用户40感兴趣区域的具体步骤包括如下:
匹配公式为:
S(x,y)/S_w = (P(x,y)-S_l)/(S_r-S_l)，即 S(x,y) = S_w·(P(x,y)-S_l)/(S_r-S_l)     (13)
其中，S(x,y)为显示屏上点的坐标，P(x,y)为瞳孔坐标，Sw为显示屏宽，Sl与Sr分别为用户40观察显示屏时坐标左、右方向极大值；通过公式(13)和步骤(2.4)中所记录的坐标数据计算出瞳孔所对应的显示屏坐标，即求得S(x,y)的值。在具体求得S(x,y)值时，先通过步骤(2.4)中记录的显示屏坐标S(x,y)与瞳孔P(x,y)坐标，计算出Sl与Sr，得出这两个未知数之后，公式(13)中就只有P(x,y)和S(x,y)两个未知数了，当有一个新的P(x,y)时，就可以计算S(x,y)的值了，即求得所述瞳孔的聚焦区域。
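Reading formula (13) as a linear interpolation between the calibrated pupil extremes S_l and S_r (an interpretation of the published formula, with illustrative names and sample values), the pupil-to-screen mapping can be sketched as:

```python
# Hypothetical sketch of the calibration step (2.4) and formula (13):
# map a pupil x-coordinate linearly onto a screen x-coordinate in [0, S_w].
def calibrate(pupil_xs, screen_width):
    """pupil_xs: pupil x-coordinates recorded while the user scans the screen;
    their extremes give the S_l and S_r of formula (13)."""
    s_l, s_r = min(pupil_xs), max(pupil_xs)

    def pupil_to_screen(p):
        # Linear interpolation: S = S_w * (P - S_l) / (S_r - S_l)
        return screen_width * (p - s_l) / (s_r - s_l)

    return pupil_to_screen

mapper = calibrate([10.0, 14.0, 18.0], 320)
```

A pupil coordinate halfway between the extremes then maps to the middle of the screen, which is the gaze-focus estimate the viewer-counting step consumes.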
为了更好的根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数,在一些实施例中,请参阅图4,S30还包括如下步骤:
S33:若所述摄像头的感测范围内的其中一个瞳孔与其余瞳孔的间距均不在所述阈值范围内,判断所述其中一个瞳孔是否聚焦于所述支付界面;若是,则注视所述支付界面的人数增加1人。
具体地,例如,若所述摄像头的感测范围内有3个瞳孔,阈值范围为55mm-65mm,若其中一个瞳孔与其余两个瞳孔的间距分别为40mm和70mm,则可确定所述瞳孔与其余瞳孔的间距40mm和70mm均不在所述阈值范围内55mm-65mm。进一步判断所述瞳孔是否聚焦于所述支付界面;若是,则注视所述支付界面的人数增加1人。
其中,所述支付界面是指用户在通过指纹输入、密码输入及人脸识别等支付过程中电子设备上出现的所有界面。
具体地，所述判断所述间距两端的瞳孔是否聚焦于所述支付界面可首先利用USB摄像头实时采集动态图像信息作为输入，提取视频中每一帧图像信息，利用基于Harr特征的分类识别学习算法检测定位帧图像信息中人脸和人眼位置，提取人眼感兴趣区域，并在该区域上采用改进的SUSAN角点检测算法定位人眼外眼角坐标，同时通过目标跟踪技术实时跟踪人眼瞳孔运动轨迹，利用瞳孔中心位置和人眼角点坐标作为视线特征参数构建视线方向计算模型，确定所述间距两端的瞳孔的聚焦区域，进而根据所述瞳孔的聚焦区域判断所述间距两端的瞳孔是否聚焦于所述支付界面。
在一些实施例中，当用户在进行支付时，为了避免陌生人偷窥，更好地保障用户的财产安全，在根据所述支付环境信息，执行相应的安全操作之后，所述方法还包括：
对所述支付界面的内容进行重排,使得在所述可视区域注视所述支付界面时图像清晰可见,在所述干扰区域注视所述支付界面时出现图像串扰。
其中,所述电子设备通过显示屏显示所述支付界面,所述显示屏外设置有实现指向性光源的膜材,所述膜材将所述感测范围分为可视区域和干扰区域。所述可视区域由所述显示屏向外延伸,所述干扰区域环绕所述可视区域。
具体地,通过更改所述支付界面的像素排布,将进入左眼和右眼的图像指向不同的位置,以使所述支付界面出现图像串扰,从而使得在所述可视区域注视所述支付界面时图像清晰可见,在所述干扰区域注视所述支付界面时图像模糊或不可见。
具体工作原理为:举例说明,若所述显示屏为具有自发光显示单元的显示屏,比如OLED显示屏或者微型发光二极管(Micro-LED)显示屏。
显示屏存在一定的视角,从垂直于显示平面的方向(可视区域)观测电子设备10,亮度较高;但从偏离法线一定角度观测(干扰区域),所述棱镜膜可将所述支付界面的图像进行重新排布,使干扰区域内的所述图像形成一个暗条以及部分区域会产生重影,导致严重的视场损失和畸变,使所述图像在所述干扰区域内不可见。
其中，所述膜材为柱状棱镜膜。所述柱状棱镜膜包括第一透光基材以及多个棱镜结构，这些棱镜结构配置于第一透光基材之远离第一扩散层的表面，且这些棱镜结构接收来自投影机的影像光束并导引影像光束沿第一方向传递。第一透光基材的材质可以是聚对苯二甲酸乙二酯(polyethylene terephthalate，PET)或其它透光材质。各棱镜结构内可设置扩散粒子，以扩散影像光束。本实施例的棱镜结构例如为三角柱，这些棱镜结构沿着第二方向彼此平行排列，且各棱镜结构沿着第三方向延伸，亦即各棱镜结构的长轴平行于第三方向。在本实施例中，第一方向、第二方向及第三方向例如是彼此相互垂直。此外，本申请并不限制棱镜结构的具体形状及排列方式，举例来说，棱镜结构例如是以同心圆状分布的菲涅耳透镜(Fresnel lens)结构。
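The column-interleaving idea behind the pixel rearrangement can be illustrated with a toy sketch (names and the even/odd steering rule are illustrative assumptions; real pixel steering depends on the actual film geometry):

```python
# Toy sketch: under a columnar prism (lenticular) film, alternating pixel
# columns are steered toward different directions, so interleaving a clear
# image with an interfering one makes the interface legible only head-on.
def interleave_columns(left_img, right_img):
    """Even columns take the left-eye image, odd columns the right-eye image."""
    out = []
    for lrow, rrow in zip(left_img, right_img):
        out.append([lrow[c] if c % 2 == 0 else rrow[c]
                    for c in range(len(lrow))])
    return out

left  = [[1, 1, 1, 1]]   # payment interface as seen from the visible area
right = [[9, 9, 9, 9]]   # interfering image steered toward the side
mixed = interleave_columns(left, right)   # -> [[1, 9, 1, 9]]
```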
需要说明的是,在上述各个实施例中,上述各步骤之间并不必然存在一定的先后顺序,本领域普通技术人员,根据本申请实施例的描述可以理解,不同实施例中,上述各步骤可以有不同的执行顺序,亦即,可以并行执行,亦可以交换执行等等。
作为本申请实施例的另一方面,本申请实施例提供一种防止偷窥装置50。请参阅图5,该防止偷窥装置50包括:瞳孔追踪模块51、瞳孔信息提取模块52、人数计算模块53及安全支付模块54。
所述瞳孔追踪模块51用于支付过程中启用摄像头进行瞳孔检测与眼球追踪。
所述瞳孔信息提取模块52用于提取所述摄像头的感测范围内瞳孔数量与瞳孔位置。
所述人数计算模块53用于根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数。
所述安全支付模块54用于若确定出注视所述支付界面的人数为两人或以上进行提示和/或中止所述支付过程。
所述防止偷窥装置50通过首先在支付过程中启用摄像头进行瞳孔检测与眼球追踪,进而提取所述摄像头的感测范围内瞳孔数量与瞳孔位置,然后根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数,进而若确定出注视所述支付界面的人数为两人或以上进行提示和/或中止所述支付过程,从而当用户在商场,门店等人员众多,支付环境复杂的环境下,防止陌生人偷窥,提高了支付操作的安全性,使用户的财产安全得到很好的保障。
其中,所述人数计算模块53包括瞳孔间距计算单元及人数判断单元;
所述瞳孔间距计算单元用于若提取出所述摄像头的感测范围内所述瞳孔数量在两个或以上,计算各个瞳孔间的间距。
所述人数判断单元用于若所述间距在阈值范围内，判断所述间距两端的瞳孔是否聚焦所述支付界面；若是，则注视所述支付界面的人数增加1人。所述人数判断单元还用于若所述摄像头的感测范围内的其中一个瞳孔与其余瞳孔的间距均不在所述阈值范围内，判断所述瞳孔是否聚焦于所述支付界面；若是，则注视所述支付界面的人数增加1人。
在一些实施例中,所述防止偷窥装置50还包括图像重排模块,所述图像重排模块用于对所述支付界面的内容进行重排,使得在所述可视区域注视所述支付界面时图像清晰可见,在所述干扰区域注视所述支付界面时出现图像串扰。
具体地,通过更改所述支付界面的像素排布,将进入左眼和右眼的图像指向不同的位置,以使所述支付界面出现图像串扰。
其中,所述电子设备通过显示屏显示所述支付界面,所述显示屏外设置有实现指向性光源的膜材,所述膜材将所述感测范围分为可视区域和干扰区域。其中,所述可视区域由所述显示屏向外延伸,所述干扰区域环绕所述可视区域。其中,所述膜材为柱状棱镜膜。
需要说明的是,上述防止偷窥装置可执行本申请实施例所提供的支付过程中防止偷窥的方法,具备执行方法相应的功能模块和有益效果。未在防止偷窥装置实施例中详尽描述的技术细节,可参见本申请实施例所提供的支付过程中防止偷窥的方法。
图6为本申请实施例提供的电子设备100的结构框图。该电子设备100可以用于实现所述主控芯片中的全部或者部分功能模块的功能。该电子设备100可以包括:棱镜膜、处理器110、存储器120以及通信模块130。
所述棱镜膜包括第一透光基材以及多个棱镜结构，这些棱镜结构配置于第一透光基材之远离第一扩散层的表面，且这些棱镜结构接收来自投影机的影像光束并导引影像光束沿第一方向传递。第一透光基材的材质可以是聚对苯二甲酸乙二酯(polyethylene terephthalate，PET)或其它透光材质。各棱镜结构内可设置扩散粒子，以扩散影像光束。本实施例的棱镜结构例如为三角柱，这些棱镜结构沿着第二方向彼此平行排列，且各棱镜结构沿着第三方向延伸，亦即各棱镜结构的长轴平行于第三方向。在本实施例中，第一方向、第二方向及第三方向例如是彼此相互垂直。此外，本申请并不限制棱镜结构的具体形状及排列方式，举例来说，棱镜结构例如是以同心圆状分布的菲涅耳透镜(Fresnel lens)结构。所述处理器110、存储器120以及通信模块130之间通过总线的方式，建立任意两者之间的通信连接。
处理器110可以为任何类型,具备一个或者多个处理核心的处理器110。其可以执行单线程或者多线程的操作,用于解析指令以执行获取数据、执行逻辑运算功能以及下发运算处理结果等操作。
存储器120作为一种非暂态计算机可读存储介质,可用于存储非暂态软件程序、非暂态性计算机可执行程序以及模块,如本申请实施例中的支付过程中防止偷窥的方法对应的程序指令/模块(例如,附图5所示的瞳孔追踪模块51、瞳孔信息提取模块52、人数计算模块53及安全支付模块54)。处理器110通过运行存储在存储器120中的非暂态软件程序、指令以及模块,从而执行防止偷窥装置50的各种功能应用以及数据处理,即实现上述任一方法实施例中支付过程中防止偷窥的方法。
存储器120可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储根据防止偷窥装置50的使用所创建的数据等。此外,存储器120可以包括高速随机存取存储器,还可以包括非暂态存储器,例如至少一个磁盘存储器件、闪存器件、或其他非暂态固态存储器件。在一些实施例中,存储器120可选包括相对于处理器110远程设置的存储器,这些远程存储器可以通过网络连接至电子设备100。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
所述存储器120存储有可被所述至少一个处理器110执行的指令;所述至少一个处理器110用于执行所述指令,以实现上述任意方法实施例中支付过程中防止偷窥的方法,例如,执行以上描述的方法步骤10、20、30、40等等,实现图5中的模块51-54的功能。
通信模块130是用于建立通信连接,提供物理信道的功能模块。通信模块130以是任何类型的无线或者有线通信模块130,包括但不限于WiFi模块或者蓝牙模块等。
进一步地,本申请实施例还提供了一种非暂态计算机可读存储介质,所述非暂态计算机可读存储介质存储有计算机可执行指令,该计算机可执行指令被一个或多个处理器110执行,例如,被图6中的一个处理器110执行,可使得上述一个或多个处理器110执行上述任意方法实施例中支付过程中防止偷窥的方法,例如,执行以上描述的方法步骤10、20、30、40等等,实现图5中的模块51-54的功能。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
通过以上的实施方式的描述，本领域普通技术人员可以清楚地了解到各实施方式可借助软件加通用硬件平台的方式来实现，当然也可以通过硬件。本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程是可以通过计算机程序产品中的计算机程序来指令相关的硬件来完成，所述的计算机程序可存储于一非暂态计算机可读取存储介质中，该计算机程序包括程序指令，当所述程序指令被相关设备执行时，可使相关设备执行上述各方法的实施例的流程。其中，所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory，ROM)或随机存储记忆体(Random Access Memory，RAM)等。
上述产品可执行本申请实施例所提供的支付过程中防止偷窥的方法,具备执行支付过程中防止偷窥的方法相应的功能模块和有益效果。未在本实施例中详尽描述的技术细节,可参见本申请实施例所提供的支付过程中防止偷窥的方法。
最后应说明的是：以上实施例仅用以说明本申请的技术方案，而非对其限制；在本申请的思路下，以上实施例或者不同实施例中的技术特征之间也可以进行组合，步骤可以以任意顺序实现，并存在如上所述的本申请的不同方面的许多其它变化，为了简明，它们没有在细节中提供；尽管参照前述实施例对本申请进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (10)

  1. 一种支付过程中防止偷窥的方法,应用于电子设备,其特征在于,所述方法包括:
    支付过程中启用摄像头进行瞳孔检测与眼球追踪;
    提取所述摄像头的感测范围内瞳孔数量与瞳孔位置;
    根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数;
    若确定出注视所述支付界面的人数为两人或以上进行提示和/或中止所述支付过程。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数,包括:
    若提取出所述摄像头的感测范围内所述瞳孔数量在两个或以上,计算各个瞳孔间的间距;
    若所述间距在阈值范围内,判断所述间距两端的瞳孔是否聚焦所述支付界面;若是,则注视所述支付界面的人数增加1人。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述感测范围内所述瞳孔数量与所述瞳孔位置确定注视所述电子设备的支付界面的人数,还包括:
    若所述摄像头的感测范围内的其中一个瞳孔与其余瞳孔的间距均不在所述阈值范围内,判断所述其中一个瞳孔是否聚焦于所述支付界面;若是,则注视所述支付界面的人数增加1人。
  4. 根据权利要求1所述的方法,其特征在于,所述电子设备通过显示屏显示所述支付界面,所述显示屏外设置有实现指向性光源的膜材,所述膜材将所述感测范围分为可视区域和干扰区域;所述方法还包括:
    对所述支付界面的内容进行重排,使得在所述可视区域注视所述支付界面时图像清晰可见,在所述干扰区域注视所述支付界面时出现图像串扰。
  5. 根据权利要求4所述的方法,其特征在于,所述在所述干扰区域注视所述支付界面时出现图像串扰,包括:
    更改所述支付界面的像素排布,将进入左眼和右眼的图像指向不同的位置,以使所述支付界面出现图像串扰。
  6. 根据权利要求4所述的方法,其特征在于,所述可视区域由所述显示屏向外延伸,所述干扰区域环绕所述可视区域。
  7. 根据权利要求4所述的方法,其特征在于,所述膜材为柱状棱镜膜。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述中止所述支付过程,包括:
    将所述电子设备上的人脸识别功能、指纹输入功能或密码输入功能作失效处理。
  9. 一种电子设备,其特征在于,包括:
    至少一个处理器;以及
    与所述至少一个处理器通信连接的存储器;其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够用于执行如权利要求1-8中任一项所述的支付过程中防止偷窥的方法。
  10. 一种非暂态计算机可读存储介质,其特征在于,所述非暂态计算机可读存储介质存储有计算机可执行指令,所述计算机可执行指令用于使计算机执行如权利要求1-8任一项所述的方法。
PCT/CN2019/103066 2019-08-28 2019-08-28 支付过程中防止偷窥的方法及电子设备 WO2021035575A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/103066 WO2021035575A1 (zh) 2019-08-28 2019-08-28 支付过程中防止偷窥的方法及电子设备
CN201980010281.XA CN111801700B (zh) 2019-08-28 2019-08-28 支付过程中防止偷窥的方法及电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/103066 WO2021035575A1 (zh) 2019-08-28 2019-08-28 支付过程中防止偷窥的方法及电子设备

Publications (1)

Publication Number Publication Date
WO2021035575A1 true WO2021035575A1 (zh) 2021-03-04

Family

ID=72805801

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103066 WO2021035575A1 (zh) 2019-08-28 2019-08-28 支付过程中防止偷窥的方法及电子设备

Country Status (2)

Country Link
CN (1) CN111801700B (zh)
WO (1) WO2021035575A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139804A (zh) * 2021-05-11 2021-07-20 支付宝(杭州)信息技术有限公司 一种结算设备
CN114564098A (zh) * 2022-02-22 2022-05-31 广州思涵信息科技有限公司 基于计算机视觉识别技术的电脑屏幕显示控制系统和方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818418A (zh) * 2021-01-20 2021-05-18 深圳市商汤科技有限公司 信息安全管理方法和装置
CN113221699B (zh) * 2021-04-30 2023-09-08 杭州海康威视数字技术股份有限公司 一种提高识别安全性的方法、装置、识别设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010128778A (ja) * 2008-11-27 2010-06-10 Sony Ericsson Mobilecommunications Japan Inc 情報表示装置、情報表示装置の覗き見防止方法及び覗き見防止プログラム
US20150113666A1 (en) * 2013-01-14 2015-04-23 Lookout, Inc. Protecting display of potentially sensitive information
CN105303091A (zh) * 2015-10-23 2016-02-03 广东小天才科技有限公司 一种基于眼球追踪技术的隐私保护方法和系统
CN107194288A (zh) * 2017-04-25 2017-09-22 上海与德科技有限公司 显示屏的控制方法及终端
CN108470131A (zh) * 2018-03-27 2018-08-31 百度在线网络技术(北京)有限公司 用于生成提示信息的方法和装置
CN108595278A (zh) * 2018-04-13 2018-09-28 Oppo广东移动通信有限公司 防偷窥提示方法及相关产品
CN109284640A (zh) * 2018-10-17 2019-01-29 深圳超多维科技有限公司 一种防窥的方法、装置、电子设备及立体显示设备
CN109543473A (zh) * 2018-11-13 2019-03-29 Oppo(重庆)智能科技有限公司 终端的防偷窥方法、装置、终端及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992730A (zh) * 2017-11-28 2018-05-04 宇龙计算机通信科技(深圳)有限公司 一种屏幕信息保护方法和装置
CN108416235B (zh) * 2018-03-30 2019-08-09 百度在线网络技术(北京)有限公司 显示界面防偷窥的方法、装置、存储介质及终端设备

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010128778A (ja) * 2008-11-27 2010-06-10 Sony Ericsson Mobilecommunications Japan Inc 情報表示装置、情報表示装置の覗き見防止方法及び覗き見防止プログラム
US20150113666A1 (en) * 2013-01-14 2015-04-23 Lookout, Inc. Protecting display of potentially sensitive information
CN105303091A (zh) * 2015-10-23 2016-02-03 广东小天才科技有限公司 一种基于眼球追踪技术的隐私保护方法和系统
CN107194288A (zh) * 2017-04-25 2017-09-22 上海与德科技有限公司 显示屏的控制方法及终端
CN108470131A (zh) * 2018-03-27 2018-08-31 百度在线网络技术(北京)有限公司 用于生成提示信息的方法和装置
CN108595278A (zh) * 2018-04-13 2018-09-28 Oppo广东移动通信有限公司 防偷窥提示方法及相关产品
CN109284640A (zh) * 2018-10-17 2019-01-29 深圳超多维科技有限公司 一种防窥的方法、装置、电子设备及立体显示设备
CN109543473A (zh) * 2018-11-13 2019-03-29 Oppo(重庆)智能科技有限公司 终端的防偷窥方法、装置、终端及存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139804A (zh) * 2021-05-11 2021-07-20 支付宝(杭州)信息技术有限公司 一种结算设备
CN114564098A (zh) * 2022-02-22 2022-05-31 广州思涵信息科技有限公司 基于计算机视觉识别技术的电脑屏幕显示控制系统和方法
CN114564098B (zh) * 2022-02-22 2022-10-25 广州思涵信息科技有限公司 基于计算机视觉识别技术的电脑屏幕显示控制系统和方法

Also Published As

Publication number Publication date
CN111801700A (zh) 2020-10-20
CN111801700B (zh) 2022-04-19

Similar Documents

Publication Publication Date Title
US11546505B2 (en) Touchless photo capture in response to detected hand gestures
US20210407203A1 (en) Augmented reality experiences using speech and text captions
WO2021035575A1 (zh) 支付过程中防止偷窥的方法及电子设备
KR102544062B1 (ko) 가상 이미지 표시 방법, 저장 매체 및 이를 위한 전자 장치
US11520399B2 (en) Interactive augmented reality experiences using positional tracking
US11854147B2 (en) Augmented reality guidance that generates guidance markers
CN106104650A (zh) 经由凝视检测进行远程设备控制
US11689877B2 (en) Immersive augmented reality experiences using spatial audio
US20240144611A1 (en) Augmented reality eyewear with speech bubbles and translation
US11582409B2 (en) Visual-inertial tracking using rolling shutter cameras
US11741679B2 (en) Augmented reality environment enhancement
US11195341B1 (en) Augmented reality eyewear with 3D costumes
EP4172738A1 (en) Augmented reality experiences using social distancing
US20230362573A1 (en) Audio enhanced augmented reality
US20240045494A1 (en) Augmented reality with eyewear triggered iot
US20210406542A1 (en) Augmented reality eyewear with mood sharing
US20220103745A1 (en) Image capture eyewear with context-based sending

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19943292

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19943292

Country of ref document: EP

Kind code of ref document: A1