CN110658907A - Method and device for acquiring user behavior data - Google Patents

Method and device for acquiring user behavior data

Info

Publication number
CN110658907A
CN110658907A
Authority
CN
China
Prior art keywords
user
user interface
attention
sight
focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810688360.3A
Other languages
Chinese (zh)
Inventor
孔喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Health Information Technology Ltd
Original Assignee
Alibaba Health Information Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Health Information Technology Ltd filed Critical Alibaba Health Information Technology Ltd
Priority to CN201810688360.3A priority Critical patent/CN110658907A/en
Publication of CN110658907A publication Critical patent/CN110658907A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Game Theory and Decision Science (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose a method and an apparatus for acquiring user behavior data, applied to a client device equipped with a depth sensing device. The method comprises the following steps: acquiring the focus point of the user's gaze on a user interface using the depth sensing device on the client device; tracking the focus point, and acquiring the dwell time and attention range of the user's gaze on the user interface; and determining the user's visual activity data on the user interface based on the dwell time and the attention range. With this technical solution, the user's visual behavior data can be collected without the user being aware of it, improving the accuracy of the acquired user behavior data.

Description

Method and device for acquiring user behavior data
Technical Field
The present application relates to the field of data acquisition technologies, and in particular, to a method and an apparatus for acquiring user behavior data.
Background
In recent years, with the rapid development of big data, user behavior data has become a high-value information asset. With the user's authorization, behavior data can be collected and analyzed to produce high-value data results, which is of great significance for guiding user behavior, formulating market strategies, and the like.
In the prior art, user behavior data is generally acquired through event tracking points ("buried points") embedded in the application. In this approach, events of interest (such as clicks and page views) are first defined in the application. The application is then monitored at runtime, and when an event of interest occurs, the corresponding data is collected and sent to a server. The events to be monitored are usually provided by platforms such as the operating system, the browser, or the APP framework, and may also be custom events that add trigger conditions on top of those events (e.g., clicking a particular button). Typically, the tracking points are implemented programmatically through an SDK provided by an analytics tool.
However, user behavior data obtained through event tracking deviates from actual behavior to some extent, so the statistical results are not accurate enough. There is therefore a need in the art for a way to obtain user behavior data accurately.
Disclosure of Invention
The embodiments of the present application aim to provide a method and an apparatus for acquiring user behavior data that can collect a user's visual behavior data without the user's awareness and improve the accuracy of the acquired user behavior data.
Specifically, the method and apparatus for acquiring user behavior data are implemented as follows:
a method for acquiring user behavior data, which is applied to a client device with a depth sensing device, comprises the following steps:
acquiring a focus of a user sight on a user interface by using a depth sensing device on the client equipment;
tracking the focus, and acquiring the stay time and focus range of the sight of the user on the user interface;
determining visual activity data of the user on the user interface based on the dwell time and the attention range.
An apparatus for acquiring user behavior data, the apparatus comprising a depth sensing device, a display, a processor, and a memory for storing processor-executable instructions, wherein:
the display is used for displaying a user interface;
the depth sensing device is used for determining the focus point of a user's gaze on the user interface, tracking the focus point, and acquiring the dwell time and attention range of the user's gaze on the user interface;
the processor, when executing the instructions, implements: determining the user's visual activity data on the user interface based on the dwell time and the attention range.
A computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any of the above embodiments.
According to the method and apparatus for acquiring user behavior data provided herein, the focus point of the user's gaze on the user interface can be acquired using the existing hardware of a client device equipped with a depth sensing device, and the behavior data of the user's gaze on the user interface can be obtained from that focus point. With the method provided by the present application, user behavior data can be collected without the user's awareness. Compared with prior-art data collection via tracking points embedded in the client application, the behavior data acquired in this way is closer to the user's real behavior, and therefore has practical reference value for subsequent uses such as statistical analysis of data metrics, adjustment of the user interface layout, and optimization of displayed content.
Drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some of the embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 3 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 4 is a schematic diagram of an application scenario provided in an embodiment of the present application;
Fig. 5 is a flowchart of one embodiment of the method for acquiring user behavior data provided herein;
fig. 6 is a schematic block structure diagram of an embodiment of a device for acquiring user behavior data provided by the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
To help those skilled in the art understand the technical solutions provided in the embodiments, the technical environment in which they are implemented is described first. The prior-art method of acquiring user behavior data through event tracking proceeds as follows. First, events of interest are defined in the application. While the user accesses the application, events triggered by the user's actions are monitored, and when an event of interest occurs, the corresponding data is sent to a server. The server can analyze and process the collected behavior data, counting, for example, how long users stay on a page of interest and how often they trigger a preset event, to produce data results such as conversion rates. Finally, the data results can be presented, for example, on a visual data dashboard, and interpreted later.
The data results may reflect what users actually did; for example: "the purchase button is clicked 18 million times per day on average", "30% of users who stay on the product detail page for 15s-30s purchase a product", and "users who visited a product 3 or more times are all in the 20-30 age group". However, the prior-art method of acquiring user behavior data suffers from statistical deviation. To obtain the result that "30% of users who stay on the product detail page for 15s-30s purchase a product", the time users spend on the product detail page must be counted, and the prior art mainly counts the time from opening the page to leaving it. In many cases, however, the user's attention may not be on the client page at all; the user may, for instance, have put the client device aside to do something else. In other words, data results such as the dwell time on the product detail page described above may well be overestimated.
Based on technical requirements similar to those described above, the present application provides a method for acquiring user behavior data. Based on the focus point of the user's gaze on the client display, the method can acquire more accurate and finer-grained user behavior data, so that more accurate and reliable data results can be obtained statistically.
The method provided by the embodiments is described below through a specific application scenario. As shown in fig. 1, a user is browsing on a mobile phone. While the user browses mobile phone application A, the application may invoke the binocular camera on the client device to acquire the focus point of the user's gaze on the phone's display interface. In a specific implementation, a spatial coordinate system may be established based on the plane of the phone: as shown in fig. 1, the direction parallel to the phone's plane and pointing toward the user's right hand may be taken as the X axis, the direction parallel to the phone's plane and perpendicular to the X axis as the Y axis, and the direction perpendicular to the phone's plane and pointing outward as the Z axis. In this coordinate system, the positions of the user's pupils and the gaze directions of both eyes can be obtained. As shown in fig. 2, a triangular plane is constructed from the two gaze directions and the line connecting the pupils, and the angle between this plane and the Z axis is determined; the focus point of the user's gaze on the phone screen can then be calculated from the pupil positions and that angle. As shown in fig. 3, the calculated focus point of the user's gaze on the phone screen is the gray dot in the figure. The focus point is then tracked, the dwell time and attention range of the user's gaze on the phone's display interface are acquired, and the user's visual activity data on the user interface is determined based on the dwell time and the attention range.
In another scenario, after the visual activity data of the user's gaze on the phone display is acquired, it may be analyzed together with the content shown on the display. In one example, the dwell time and attention range can be used to count the total time the user's gaze stays at various positions on the user interface. From these totals, each user's degree of attention to each area of the user interface can be derived; for example, some users habitually look at the upper part of the phone screen, while others habitually look at the lower right corner. Notification messages, such as advertisement pushes, can then be placed at a user's preferred locations based on the user's attention to the various areas of the user interface.
In another example, a user is browsing a shopping application of an e-commerce platform. As shown in fig. 4, the product purchase page of the shopping application is divided into four main areas: a product picture display area, a product title display area, a product detail introduction area, and a product review area. From the user's activity data in the shopping application over one month, the number of times the user's gaze browsed each area is counted; in this scenario, a gaze dwelling in an area for more than three seconds may be counted as one browse. As shown in fig. 4, according to these statistics the user browsed the product picture display area 3700 times, the product title display area 1000 times, the product detail introduction area 900 times, and the product review area 1800 times. The user's degree of attention to each area can additionally be marked with lighter or darker colors according to the browse counts; the darker the color, the more attention the user pays to that area's displayed content. The displayed content of each area can subsequently be adjusted using these statistics, so as to capture the user's points of interest, preferences, and the like.
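For illustration only, the following minimal Python sketch shows one way the per-area browse count described above might be computed. The event format, area names, and the three-second threshold are assumptions for the example, not part of the application.

# A minimal sketch of the per-area browse count described above. It assumes
# gaze fixations have already been mapped to named page areas; the event
# format, area names, and the 3-second threshold are illustrative only.
from collections import Counter

BROWSE_THRESHOLD_S = 3.0  # a dwell longer than this counts as one browse

def count_browses(fixations):
    """fixations: iterable of (area_name, dwell_seconds) tuples."""
    browses = Counter()
    for area, dwell in fixations:
        if dwell > BROWSE_THRESHOLD_S:
            browses[area] += 1
    return browses

# Example: three fixations, of which only two are long enough to count.
events = [("picture", 4.2), ("title", 1.1), ("reviews", 5.0)]
print(count_browses(events))  # 'picture' and 'reviews' each get one browse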
The method for acquiring user behavior data is described in detail below with reference to the accompanying drawings. Fig. 5 is a flowchart of one embodiment of the method provided in the present application. Although the present application provides method steps as shown in the following embodiments or figures, the method may include more or fewer steps based on conventional or non-inventive effort. Where steps have no logically necessary causal relationship, their order of execution is not limited to that given in the embodiments. In practice, the method may be executed sequentially or in parallel (for example, on a parallel processor or in a multi-threaded environment) in the order shown in the embodiments or figures.
Specifically, as shown in fig. 5, an embodiment of the method for acquiring user behavior data provided by the present application, applied to a client device having a depth sensing device, may include:
S501: acquiring the focus point of the user's gaze on the user interface using the depth sensing device on the client device.
S503: tracking the focus point, and acquiring the dwell time and attention range of the user's gaze on the user interface.
S505: determining the user's visual activity data on the user interface based on the dwell time and the attention range.
The method for acquiring user behavior data can be applied to a client device having a depth sensing device, which can be used to locate the user's pupils and measure the distance between the pupils and the client device. In this embodiment, the depth sensing device may include at least one of: a binocular camera, dual infrared sensors, a single camera plus an infrared sensor, or a time-of-flight (ToF) sensor. Typically, the client device may be a mobile phone with dual cameras, a tablet computer, or the like. When the user faces the display screen of the client device, the depth sensing device on the client device can capture the activity of the user's pupils. In this embodiment, the depth sensing device may be fixed to the client device, for example integrated into it, or connected to it through a component such as a pivoting mount. In this embodiment, the depth sensing device on the client device may be used to acquire the focus point of the user's gaze on the user interface, and the focus point may be tracked to acquire the behavior data of the user's gaze on the user interface.
In one embodiment of the application, in the process of acquiring the focus point of the user's gaze on the user interface with the depth sensing device, a spatial coordinate system can be established based on the client device. In a specific example, a three-dimensional coordinate system is based on the plane of the client device's display screen: the direction parallel to the display screen and pointing toward the user's right hand may be taken as the positive x direction, the direction parallel to the display screen, perpendicular to the x axis, and pointing toward the bottom of the screen as the positive y direction, and the direction perpendicular to the xoy plane and pointing outward from the screen as the positive z direction. Of course, in other embodiments the spatial coordinate system may be of another kind, such as a polar, spherical, or cylindrical coordinate system; the application does not limit the kind of coordinate system established.
In this embodiment, after the spatial coordinate system is established based on the client device, the position of the user's pupils in that coordinate system and the angle between the user's line of sight and the user interface may be obtained. In one embodiment, the depth sensing device can locate the user's pupils, and the pupil positions and the gaze angle can also be obtained by ranging. Having obtained the position and the angle, the intersection of the user's line of sight with the user interface may be calculated from them and taken as the focus point of the user's gaze on the user interface. In one specific example, the focus point of the user's gaze on the user interface may be calculated in the following three steps:
1. measure the spatial positions of the two eyes with the depth sensing device and take the midpoint of the two eye positions as the center point; the distance from this center point to the plane of the client's display screen is d;
2. from image analysis, the depth sensing device calculates the gaze direction of each eye; a triangular plane is constructed from the two gaze directions and the line connecting the pupils, and the angle between this plane and the direction perpendicular to the client's display screen is θ;
3. calculate the focus point of the user's gaze on the user interface from d and θ.
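For illustration, the following minimal Python sketch computes step 3 under simplifying assumptions not spelled out in the embodiment: the eye-center point projects onto the screen at (cx, cy), the gaze deviates from the screen normal by the angle θ, and an additional angle φ gives the in-plane direction of that deviation. All names here are hypothetical.

# A minimal sketch of step 3: the gaze line leaves the eye-center point at
# distance d from the screen, tilted theta from the screen normal, so it
# crosses the screen plane d*tan(theta) away from the center's projection.
import math

def gaze_focus_point(cx, cy, d, theta, phi):
    """Return the (x, y) screen coordinates where the gaze intersects.

    cx, cy: projection of the eye-center point onto the screen plane
    d:      distance from the eye-center point to the screen plane
    theta:  angle (radians) between the gaze and the screen normal (Z axis)
    phi:    direction (radians) of the gaze's in-plane deviation
    """
    r = d * math.tan(theta)  # in-plane offset of the intersection point
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# Eyes 30 cm from the screen, gaze tilted 10 degrees toward +y (screen bottom).
print(gaze_focus_point(0.0, 0.0, 0.30, math.radians(10), math.radians(90)))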
It should be noted that the spatial positions and angles of the pupils can be acquired by any of a binocular camera, dual infrared sensors, a single camera plus an infrared sensor, or a time-of-flight sensor. For a binocular camera, the spatial position and angle of the pupil can be calculated from the positional disparity between the two cameras. For dual infrared sensors, or a single camera plus an infrared sensor, one infrared sensor may project a pattern of infrared light spots onto the user's face while the other infrared sensor, or the single camera, captures the depth of the spots reflected back from the face. A time-of-flight sensor emits modulated near-infrared light that is reflected when it meets the user's pupils; by computing the time or phase difference between emission and reflection, the sensor can calculate the distances of facial features and thus generate depth information such as the spatial positions and angles of the pupils. In any case, in the embodiments of the present application the user's focus point on the user interface can be acquired with the client device's existing depth sensing devices, without wearing auxiliary equipment such as eye-tracking glasses, that is, without the user's awareness.
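As a brief illustration of the time-of-flight principle just described, the following sketch recovers depth from the round-trip delay of a light pulse, or equivalently from the phase shift of continuous-wave modulated light; the 20 MHz modulation frequency is an assumed example, not a value from the application.

# Minimal sketch of the ToF principle: distance from round-trip time or phase.
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_delay(round_trip_s):
    # Pulsed ToF: light travels out and back, so halve the round-trip distance.
    return C * round_trip_s / 2.0

def depth_from_phase(phase_rad, mod_freq_hz):
    # Continuous-wave ToF: a phase shift phi corresponds to a delay of
    # phi / (2*pi*f); halving again gives distance = c*phi / (4*pi*f),
    # valid within one unambiguous range (phase below 2*pi).
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

print(depth_from_delay(2e-9))        # ~0.30 m for a 2 ns round trip
print(depth_from_phase(0.25, 20e6))  # ~0.30 m at 20 MHz modulation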
In this embodiment, after the focus point of the user's gaze on the user interface is obtained, it may be tracked, the dwell time and attention range of the user's gaze on the user interface may be acquired, and the user's visual activity data on the user interface determined based on the dwell time and the attention range. In one embodiment of the present application, the visual activity data may include at least one of:
an attention range in which the dwell time of the user's gaze on the user interface exceeds a preset duration;
and information corresponding to an attention range in which the dwell time of the user's gaze on the user interface exceeds a preset duration.
In this embodiment, the dwell time of the user's gaze on a user interface can reflect the user's degree of attention to it: the longer the dwell time, the higher the attention. In one example, after a user opens an application, their gaze stays on its user interface; counting that dwell time yields the user's degree of attention to the application. The attention range of the user's gaze on the user interface can, more precisely, reflect the attention paid to a local area of the interface. In practice a user often looks at only part of a user interface; for example, when browsing a product on a client, the user may focus on the product picture and the price. In this embodiment, the attention range of the user's gaze on the user interface may be acquired; in the above example, the picture and price the user attends to can be determined from that range. An attention range in which the dwell time of the user's gaze exceeds a preset duration may also be acquired, to further isolate the areas the user concentrates on. For example, a product display page may play video information about the product; if the user watches that video for a long time, the method of this embodiment can capture the user's attention to it. In another embodiment, the information corresponding to such an attention range may further be acquired, that is, the specific content inside the range. In the above example, if the user attends to a certain area of the interface for a long time, and that area contains, say, user reviews on a product display page, then the specific reviews the user is attending to can be obtained from the user's gaze activity.
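For illustration only, the following minimal sketch selects the attention ranges whose dwell time exceeds a preset duration and looks up what they contain; the fixation format, region geometry, and content lookup are all assumptions for the example.

# A minimal sketch: keep only fixations whose dwell exceeds a preset duration
# and map them to the UI regions (and content) they fall inside.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float        # focus point on screen
    y: float
    dwell_s: float  # how long the gaze stayed there

def attended_content(fixations, regions, min_dwell_s=3.0):
    """regions: dict mapping name -> ((x0, y0, x1, y1), content)."""
    hits = []
    for f in fixations:
        if f.dwell_s <= min_dwell_s:
            continue  # below the preset duration; ignore
        for name, ((x0, y0, x1, y1), content) in regions.items():
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                hits.append((name, f.dwell_s, content))
    return hits

regions = {"reviews": ((0, 600, 1080, 900), "user review list"),
           "video": ((0, 0, 1080, 400), "product video")}
fixes = [Fixation(500, 700, 8.0), Fixation(500, 100, 1.0)]
print(attended_content(fixes, regions))  # only the long dwell in 'reviews'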
In one embodiment of the present application, attention ranges may also be set in advance, such as the video display area, the review area, and the detail description area of a product display page. After a preset attention range is set, the dwell time of each user's gaze within it can be acquired; naturally, the longer the gaze stays, the more the user attends to the information within that range. The attention durations of many users over a preset attention range can then be used: the longer many users attend to a given range, the more important its content, so an operator can put more editorial effort into the content of that range and make it more engaging.
In this embodiment, the visual activity data may include, as described above, the user's attention to part of a preset attention range, to the entire user interface, or to different applications installed on the client device. After the user's degree of attention to the user interface is obtained statistically, the application's displayed content, layout, and so on can be adjusted according to the visual activity data, for example by expanding the content of areas the user is interested in and removing areas or sections the user is not.
In an embodiment of the present application, the dwell time and attention range may be statistically analyzed to obtain the user's degree of attention to the user interface. In one embodiment, statistical processing of the dwell times and attention ranges yields the total dwell time at each position on the user interface. Preferences differ between users: some habitually look at the upper part of the user interface, others at the lower right corner, and so on. Counting the user's total dwell time at each position on the user interface thus reflects the user's preference for each position. In addition, marks matched to the total dwell time can be placed at the various positions of the user interface; such marks may include heat maps, color shading, and the like.
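For illustration, the following minimal sketch accumulates gaze samples into a coarse grid over the screen to produce the kind of dwell-time heat map described above; the grid size, screen dimensions, and sample format are assumptions for the example.

# A minimal sketch of a dwell-time heat map: gaze samples are binned into a
# coarse grid over the screen, and each cell accumulates dwell time.
import numpy as np

GRID_W, GRID_H = 9, 16              # coarse cells over the screen
SCREEN_W, SCREEN_H = 1080, 1920     # assumed screen size in pixels

def dwell_heatmap(samples):
    """samples: iterable of (x, y, dwell_seconds) gaze records."""
    heat = np.zeros((GRID_H, GRID_W))
    for x, y, dwell in samples:
        col = min(int(x / SCREEN_W * GRID_W), GRID_W - 1)
        row = min(int(y / SCREEN_H * GRID_H), GRID_H - 1)
        heat[row, col] += dwell
    # Normalize so the values map directly onto color shading intensities.
    return heat / heat.max() if heat.max() > 0 else heat

heat = dwell_heatmap([(540, 300, 12.5), (900, 1700, 4.0), (520, 310, 3.0)])
print(heat.round(2))  # darker shading would correspond to values near 1.0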
In one embodiment of the present application, images of the user interfaces the user attends to may also be recorded. Specifically, the focus point may be tracked, and an image of the user interface the user's gaze attends to may be captured along with the attention range of the gaze on that interface. In a specific implementation, when the user looks at a user interface, the client device can be controlled to capture an image of the attended interface and the attention range of the user's gaze on it. In a specific example, when a user browses a product display interface presented by the client device and the user's gaze meets the display screen, an image of the currently presented user interface may be captured and the attention range of the user's gaze on it recorded; the attention range may cover the product picture display area, the user review area, the product title area, the product introduction area, and so on of the product display interface.
In one embodiment of the application, after capturing an image of the user interface the user's gaze attends to and the corresponding attention range, captured user-interface images whose degree of similarity exceeds a preset threshold can be merged. In this embodiment, similar images may share a similar layout: the product display interfaces of an e-commerce platform, for instance, usually have fairly similar layout structures. On that basis, user-interface images whose similarity exceeds a preset threshold, for example 80%, 87%, or 95%, may be merged, and the merging may retain the layout frame of the user interface. After the images of similar user interfaces are merged, the attention ranges on them can be statistically processed to obtain the degree of attention to each region of the merged image. The degree of attention may include the total number of times each region was attended to; for example, over one month of data on a product display interface, the product picture display area was attended to 3700 times in total, the user review area 1800 times, and the product introduction area 900 times. Once the degrees of attention of the regions on the merged image are obtained, they can be marked on it: in one embodiment with shades of color, in another with the attention figures themselves. With each region's degree of attention marked, the user's points of interest on the user interface can be read off quickly, and the displayed content or presentation adjusted accordingly, strengthening the interface's guidance of the user and improving the user experience.
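For illustration only, the following minimal sketch groups captured screenshots whose similarity exceeds a preset threshold and pools their per-region attention counts. The similarity measure here is a naive normalized pixel agreement on equally sized grayscale thumbnails; a real implementation would more likely compare layout structure, as the embodiment suggests. All names and formats are assumptions.

# A minimal sketch of merging similar captures and pooling attention counts.
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: equal-shape grayscale thumbnails with values in [0, 255]."""
    return 1.0 - np.abs(a.astype(float) - b.astype(float)).mean() / 255.0

def merge_captures(captures, threshold=0.87):
    """captures: list of (thumbnail, {region_name: attend_count}) pairs.
    Greedily groups each capture with the first group whose representative
    thumbnail it resembles beyond the threshold, summing counts per region."""
    groups = []  # each entry: [representative_thumbnail, pooled_counts]
    for thumb, counts in captures:
        for rep, pooled in groups:
            if similarity(rep, thumb) > threshold:
                for region, n in counts.items():
                    pooled[region] = pooled.get(region, 0) + n
                break
        else:  # no sufficiently similar group found: start a new one
            groups.append([thumb, dict(counts)])
    return groups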
According to the method for acquiring user behavior data provided in the above embodiments, the focus point of the user's gaze on the user interface can be acquired using the existing hardware of a client device equipped with a depth sensing device, and the behavior data of the user's gaze on the user interface can be obtained from that focus point. With the method provided by the present application, user behavior data can be collected without the user's awareness. Compared with prior-art data collection via tracking points embedded in the client application, the behavior data acquired in this way is closer to the user's real behavior, and therefore has practical reference value for subsequent uses such as statistical analysis of data metrics, adjustment of the user interface layout, and optimization of displayed content.
As shown in fig. 6, in another aspect the present application further provides an apparatus for acquiring user behavior data. Fig. 6 is a schematic block diagram of an embodiment of that apparatus, which comprises a depth sensing device, a display, a processor, and a memory storing processor-executable instructions, wherein:
the display is used for displaying a user interface;
the depth sensing device is used for determining the focus point of the user's gaze on the user interface, tracking the focus point, and acquiring the dwell time and attention range of the user's gaze on the user interface;
the processor, when executing the instructions, implements: determining the user's visual activity data on the user interface based on the dwell time and the attention range.
Optionally, in an embodiment of the present application, the visual activity data may include at least one of:
an attention range in which the dwell time of the user's gaze on the user interface exceeds a preset duration;
and information corresponding to an attention range in which the dwell time of the user's gaze on the user interface exceeds a preset duration.
Optionally, in an embodiment of the application, when the processor implements the step of determining the user's visual activity data on the user interface based on the dwell time and the attention range, the implementation includes:
determining that the focus point falls within a preset attention range in the user interface;
and acquiring the dwell time of the user's gaze within the preset attention range.
Optionally, in an embodiment of the present application, the depth sensing device includes at least one of: a binocular camera, dual infrared sensors, a single camera plus an infrared sensor, or a time-of-flight (ToF) sensor.
Optionally, in an embodiment of the present application, when the depth sensing device determines the focus point of the user's gaze on the user interface, the determination includes:
establishing a spatial coordinate system based on the apparatus;
acquiring, with the depth sensing device on the apparatus, the position of the user's pupils in the spatial coordinate system and the angle between the user's line of sight and the user interface;
and calculating the intersection of the user's line of sight with the user interface from the position and the angle, and taking that intersection as the focus point of the user's gaze on the user interface.
Optionally, in an embodiment of the application, when the processor performs the statistical analysis of the dwell time and the attention range to obtain the user's degree of attention to the user interface, the analysis includes:
statistically processing the dwell times and attention ranges, respectively, to obtain the total dwell time at each position on the user interface;
and placing, at the various positions of the user interface, marks matched to the total dwell time.
Optionally, in an embodiment of the application, when the processor implements the step of determining the user's visual activity data on the user interface based on the dwell time and the attention range, the implementation includes:
tracking the focus point, and capturing an image of the user interface the user's gaze attends to along with the attention range of the user's gaze on that interface.
Optionally, in an embodiment of the present application, after tracking the focus point and capturing an image of the user interface the user's gaze attends to, the processor further implements:
merging the captured user-interface images whose degree of similarity exceeds a preset threshold;
statistically processing the attention ranges on the images to obtain the degree of attention to each region of the merged image;
and marking the degrees of attention of the respective regions on the merged image.
In another aspect, the present application further provides an apparatus for acquiring user behavior data, including a processor and a memory for storing processor-executable instructions, where the processor executes the instructions to implement the steps of the method according to any of the above embodiments.
In another aspect, the present application further provides a computer-readable storage medium, on which computer instructions are stored, and the instructions, when executed, implement the steps of the method according to any of the above embodiments.
The computer-readable storage medium may include physical means for storing information, typically by digitizing the information and storing it on a medium using electrical, magnetic, or optical means. The computer-readable storage medium of this embodiment may include: devices that store information electrically, such as various kinds of memory (RAM, ROM, and the like); devices that store information magnetically, such as hard disks, floppy disks, magnetic tapes, magnetic-core memories, bubble memories, and USB drives; and devices that store information optically, such as CDs or DVDs. Of course, there are other kinds of readable storage media, such as quantum memories and graphene memories.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated-circuit chip. Moreover, this programming is nowadays mostly implemented with "logic compiler" software rather than by making integrated-circuit chips by hand; such software is similar to the compilers used in program development, and the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most widely used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logic-programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
Those skilled in the art will also appreciate that, besides implementing a controller as pure computer-readable program code, the method steps can be logic-programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means it contains for realizing the various functions may also be regarded as structures within the hardware component; indeed, the means for realizing the functions may be regarded both as software modules implementing the method and as structures within the hardware component.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented with software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored on a storage medium such as a ROM/RAM, magnetic disk, or optical disc, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or parts of them.
Although the present application has been described through embodiments, those of ordinary skill in the art will recognize that numerous variations and modifications of the present application are possible without departing from its spirit, and it is intended that the appended claims cover such variations and modifications.

Claims (20)

1. A method for acquiring user behavior data, applied to a client device having a depth sensing device, the method comprising:
acquiring the focus point of a user's gaze on a user interface using the depth sensing device on the client device;
tracking the focus point, and acquiring the dwell time and attention range of the user's gaze on the user interface;
determining visual activity data of the user on the user interface based on the dwell time and the attention range.
2. The method of claim 1, wherein the visual activity data comprises at least one of:
an attention range in which the dwell time of the user's gaze on the user interface exceeds a preset duration;
and information corresponding to an attention range in which the dwell time of the user's gaze on the user interface exceeds a preset duration.
3. The method of claim 1, wherein the determining visual activity data of the user on the user interface based on the dwell time and the attention range comprises:
determining that the focus point falls within a preset attention range in the user interface;
and acquiring the dwell time of the user's gaze within the preset attention range.
4. The method of claim 1, wherein the depth sensing device comprises at least one of: a binocular camera, dual infrared sensors, a single camera and an infrared sensor, a time-of-flight sensor.
5. The method of claim 4, wherein the acquiring the focus point of the user's gaze on the user interface using the depth sensing device on the client device comprises:
establishing a spatial coordinate system based on the client device;
acquiring, with the depth sensing device on the client device, the position of the user's pupils in the spatial coordinate system and the angle between the user's line of sight and the user interface;
and calculating the intersection of the user's line of sight with the user interface from the position and the angle, and taking the intersection as the focus point of the user's gaze on the user interface.
6. The method of claim 1, wherein the determining visual activity data of the user on the user interface based on the dwell time and the attention range comprises:
performing statistical analysis on the dwell time and the attention range to obtain the user's degree of attention to the user interface.
7. The method of claim 6, wherein the performing statistical analysis on the dwell time and the attention range to obtain the user's degree of attention to the user interface comprises:
statistically processing the dwell times and attention ranges, respectively, to obtain the total dwell time at each position on the user interface;
and setting, at the various positions of the user interface, marks matched to the total dwell time.
8. The method of claim 1, wherein the determining visual activity data of the user on the user interface based on the dwell time and the attention range comprises:
tracking the focus point, and capturing an image of the user interface attended by the user's gaze and the attention range of the user's gaze on the user interface.
9. The method of claim 8, wherein after the tracking the focus point and capturing an image of the user interface attended by the user's gaze, the method further comprises:
merging the captured user-interface images whose degree of similarity exceeds a preset threshold;
statistically processing the attention ranges on the images to obtain the degree of attention to each region of the merged image;
and marking the degrees of attention of the respective regions on the merged image.
10. An apparatus for acquiring user behavior data, the apparatus comprising a depth sensing device, a display, a processor, and a memory for storing processor-executable instructions, wherein:
the display is used for displaying a user interface;
the depth sensing device is used for determining the focus point of a user's gaze on the user interface, tracking the focus point, and acquiring the dwell time and attention range of the user's gaze on the user interface;
and the processor, when executing the instructions, implements: determining visual activity data of the user on the user interface based on the dwell time and the attention range.
11. The apparatus of claim 10, wherein the visual activity data comprises at least one of:
an attention range in which the dwell time of the user's gaze on the user interface exceeds a preset duration;
and information corresponding to an attention range in which the dwell time of the user's gaze on the user interface exceeds a preset duration.
12. The apparatus of claim 10, wherein the processor, in implementing the step of determining visual activity data of the user on the user interface based on the dwell time and the attention range, implements:
determining that the focus point falls within a preset attention range in the user interface;
and acquiring the dwell time of the user's gaze within the preset attention range.
13. The apparatus of claim 10, wherein the depth sensing device comprises at least one of: a binocular camera, dual infrared sensors, a single camera and an infrared sensor, a time-of-flight sensor.
14. The apparatus of claim 13, wherein the depth sensing device, in determining the focus point of the user's gaze on the user interface, implements:
establishing a spatial coordinate system based on the apparatus;
acquiring, with the depth sensing device on the apparatus, the position of the user's pupils in the spatial coordinate system and the angle between the user's line of sight and the user interface;
and calculating the intersection of the user's line of sight with the user interface from the position and the angle, and taking the intersection as the focus point of the user's gaze on the user interface.
15. The apparatus of claim 10, wherein the processor, in implementing the step of determining visual activity data of the user on the user interface based on the dwell time and the attention range, implements:
performing statistical analysis on the dwell time and the attention range to obtain the user's degree of attention to the user interface.
16. The apparatus of claim 15, wherein the processor, in performing the statistical analysis on the dwell time and the attention range to obtain the user's degree of attention to the user interface, implements:
statistically processing the dwell times and attention ranges, respectively, to obtain the total dwell time at each position on the user interface;
and setting, at the various positions of the user interface, marks matched to the total dwell time.
17. The apparatus of claim 10, wherein the processor, in implementing the step of determining visual activity data of the user on the user interface based on the dwell time and the attention range, implements:
tracking the focus point, and capturing an image of the user interface attended by the user's gaze and the attention range of the user's gaze on the user interface.
18. The apparatus of claim 17, wherein the processor, after tracking the focus point and capturing an image of the user interface attended by the user's gaze, further implements:
merging the captured user-interface images whose degree of similarity exceeds a preset threshold;
statistically processing the attention ranges on the images to obtain the degree of attention to each region of the merged image;
and marking the degrees of attention of the respective regions on the merged image.
19. An apparatus for acquiring user behavior data, comprising a processor and a memory for storing processor-executable instructions, the processor implementing the steps of the method of any one of claims 1 to 9 when executing the instructions.
20. A computer-readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1 to 9.
CN201810688360.3A 2018-06-28 2018-06-28 Method and device for acquiring user behavior data Pending CN110658907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810688360.3A CN110658907A (en) 2018-06-28 2018-06-28 Method and device for acquiring user behavior data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810688360.3A CN110658907A (en) 2018-06-28 2018-06-28 Method and device for acquiring user behavior data

Publications (1)

Publication Number Publication Date
CN110658907A (en) 2020-01-07

Family

ID=69026360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810688360.3A Pending CN110658907A (en) 2018-06-28 2018-06-28 Method and device for acquiring user behavior data

Country Status (1)

Country Link
CN (1) CN110658907A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110704292A (en) * 2019-10-15 2020-01-17 中国人民解放军海军大连舰艇学院 Evaluation method for display control interface design
CN111798457A (en) * 2020-06-10 2020-10-20 上海众言网络科技有限公司 Image visual weight determining method and device and image evaluation method
CN111966280A (en) * 2020-08-19 2020-11-20 浙江百应科技有限公司 Method and device for drawing thermodynamic diagram based on user sliding gesture at terminal
CN113129801A (en) * 2021-04-14 2021-07-16 Oppo广东移动通信有限公司 Control method and device, mobile terminal and storage medium
CN113126877A (en) * 2021-05-18 2021-07-16 中国银行股份有限公司 Interface use condition analysis method and device
CN116137003A (en) * 2023-04-04 2023-05-19 深圳柯赛标识智能科技有限公司 Intelligent advertisement terminal based on big data and advertisement publishing method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106598257A (en) * 2016-12-23 2017-04-26 北京奇虎科技有限公司 Mobile terminal-based reading control method and apparatus
CN107957775A (en) * 2016-10-18 2018-04-24 阿里巴巴集团控股有限公司 Data object exchange method and device in virtual reality space environment
CN107957779A (en) * 2017-11-27 2018-04-24 海尔优家智能科技(北京)有限公司 A kind of method and device searched for using eye motion control information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107957775A (en) * 2016-10-18 2018-04-24 阿里巴巴集团控股有限公司 Data object exchange method and device in virtual reality space environment
CN106598257A (en) * 2016-12-23 2017-04-26 北京奇虎科技有限公司 Mobile terminal-based reading control method and apparatus
CN107957779A (en) * 2017-11-27 2018-04-24 海尔优家智能科技(北京)有限公司 A kind of method and device searched for using eye motion control information

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110704292A (en) * 2019-10-15 2020-01-17 中国人民解放军海军大连舰艇学院 Evaluation method for display control interface design
CN110704292B (en) * 2019-10-15 2020-11-03 中国人民解放军海军大连舰艇学院 Evaluation method for display control interface design
CN111798457A (en) * 2020-06-10 2020-10-20 上海众言网络科技有限公司 Image visual weight determining method and device and image evaluation method
CN111798457B (en) * 2020-06-10 2021-04-06 上海众言网络科技有限公司 Image visual weight determining method and device and image evaluation method
CN111966280A (en) * 2020-08-19 2020-11-20 浙江百应科技有限公司 Method and device for drawing thermodynamic diagram based on user sliding gesture at terminal
CN113129801A (en) * 2021-04-14 2021-07-16 Oppo广东移动通信有限公司 Control method and device, mobile terminal and storage medium
CN113126877A (en) * 2021-05-18 2021-07-16 中国银行股份有限公司 Interface use condition analysis method and device
CN113126877B (en) * 2021-05-18 2022-07-05 中国银行股份有限公司 Interface use condition analysis method and device
CN116137003A (en) * 2023-04-04 2023-05-19 深圳柯赛标识智能科技有限公司 Intelligent advertisement terminal based on big data and advertisement publishing method

Similar Documents

Publication Publication Date Title
CN110658907A (en) Method and device for acquiring user behavior data
CA3151944C (en) Virtual fitting systems and methods for spectacles
US11650659B2 (en) User input processing with eye tracking
JP6681342B2 (en) Behavioral event measurement system and related method
CN102087582B (en) Automatic scrolling method and device
KR102092931B1 (en) Method for eye-tracking and user terminal for executing the same
CN107666987A (en) Robotic process automates
Toyama et al. A mixed reality head-mounted text translation system using eye gaze input
JP2019512793A (en) Head mounted display system configured to exchange biometric information
CN104685449A (en) User interface element focus based on user's gaze
Kato et al. DejaVu: integrated support for developing interactive camera-based programs
DE112013004801T5 (en) Multimodal touch screen emulator
CN108958577B (en) Window operation method and device based on wearable device, wearable device and medium
US20180210546A1 (en) Pose-invariant eye-gaze tracking using a single commodity camera
KR20190067433A (en) Method for providing text-reading based reward advertisement service and user terminal for executing the same
CN107220230A (en) A kind of information collecting method and device, and a kind of intelligent terminal
US11227307B2 (en) Media content tracking of users' gazing at screens
CN109543563A (en) Security prompt method, device, storage medium and electronic equipment
CN111723758A (en) Video information processing method and device, electronic equipment and storage medium
Shahid et al. Eye-gaze and augmented reality framework for driver assistance
Othman et al. CrowdEyes: Crowdsourcing for robust real-world mobile eye tracking
TW201709022A (en) Non-contact control system and method
Asghari et al. Can eye tracking with pervasive webcams replace dedicated eye trackers? an experimental comparison of eye-tracking performance
EP4167199A1 (en) Method and system for tracking and quantifying visual attention on a computing device
US11250242B2 (en) Eye tracking method and user terminal performing same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200107