CN112613364A - Target object determination method, target object determination system, storage medium, and electronic device - Google Patents

Target object determination method, target object determination system, storage medium, and electronic device

Info

Publication number
CN112613364A
CN112613364A (application CN202011458215.XA)
Authority
CN
China
Prior art keywords
user
real
response signal
physiological response
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011458215.XA
Other languages
Chinese (zh)
Inventor
崔承坤
王晨
董旭
安子骥
雷正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinhua Net Co ltd
Original Assignee
Xinhua Net Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinhua Net Co ltd
Priority to CN202011458215.XA
Publication of CN112613364A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/163 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor

Abstract

The invention provides a method, a system, a storage medium, and an electronic device for determining a target object. The method comprises the following steps: acquiring a viewpoint position of a user with respect to an image frame, and determining a candidate object corresponding to the viewpoint position in the image frame; acquiring a real-time physiological response signal of the user while the image frame is displayed; determining emotion information of the user according to the real-time physiological response signal; and, when the emotion information is preset emotion information, determining that the candidate object is a target object the user is interested in. The user's object of interest is thus determined jointly from the real-time physiological response signal and the user's gaze information, meeting scene requirements based on the user's object of interest.

Description

Target object determination method, target object determination system, storage medium, and electronic device
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method, a system, a storage medium, and an electronic device for determining a target object.
Background
At present, in application scenarios such as advertising, film and television, and news, there is a need to know which objects users are interested in. In the advertising field, for example, targeted marketing based on the topics a user is interested in can significantly improve the purchase conversion rate.
In the related art, a user's objects of interest are obtained through questionnaires. This is inefficient, and if the questionnaire answers are untruthful, the results do not reliably reflect what the user is actually interested in.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first object of the present invention is to provide a method for determining a target object, so as to determine the user's object of interest jointly from the user's real-time physiological response signal and gaze information, meeting scene requirements based on the user's object of interest.
A second object of the invention is to propose a system for determining a target object.
A third object of the invention is to propose a non-transitory computer-readable storage medium.
A fourth object of the invention is to propose an electronic device.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a method for determining a target object, including the following steps:
acquiring a viewpoint position of a user for an image frame, and determining a candidate object corresponding to the viewpoint position in the image frame;
acquiring a real-time physiological response signal of the user in the image frame display process;
determining emotion information of the user according to the real-time physiological response signal;
and when the emotion information is preset emotion information, determining that the candidate object is the target object interested by the user.
To achieve the above object, an embodiment of a second aspect of the present invention provides a target object determination system, including a physiological response signal acquisition device and a processor, the physiological response signal acquisition device being connected to the processor, wherein,
the physiological response signal acquisition equipment is used for acquiring the viewpoint position of the user aiming at the image frame;
the processor to determine a candidate object corresponding to the viewpoint position in the image frame;
the physiological response signal acquisition equipment is also used for acquiring a real-time physiological response signal of the user in the image frame display process;
the processor is further configured to determine emotion information of the user according to the real-time physiological response signal, and when the emotion information is preset emotion information, determine that the candidate object is a target object interested by the user.
In order to achieve the above object, a third aspect of the present invention provides a non-transitory computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor, enable execution of the method for determining a target object described in the first aspect of the present invention.
In order to achieve the above object, a fourth aspect of the present invention provides an electronic device; when instructions in the electronic device are executed by its processor, the method for determining a target object described in the first aspect of the present invention can be performed.
The embodiment of the invention at least comprises the following beneficial technical effects:
the method comprises the steps of obtaining a viewpoint position of a user for an image frame, determining a candidate object corresponding to the viewpoint position in the image frame, further obtaining a real-time physiological response signal of the user in the image frame display process, determining emotion information of the user according to the real-time physiological response signal, and finally determining the candidate object as a target object interested by the user when the emotion information is preset emotion information. Therefore, the interested object of the user is determined together by combining the real-time physiological reaction signal and the gazing sight line information of the user, and the scene requirement based on the interested object of the user is met.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a method for determining a target object according to an embodiment of the present invention;
fig. 2 is a schematic view of a determined scene of a target object according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of another method for determining a target object according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another method for determining a target object according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of yet another method for determining a target object according to an embodiment of the present invention;
fig. 6 is a schematic view of another determined scene of a target object according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a target object determination system according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements, or to elements having the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, are intended to explain the invention, and are not to be construed as limiting the invention.
The target object determination method, system, storage medium, and electronic device of the embodiments of the present invention are described below with reference to the drawings.
As analyzed above, in the related art the acquisition of a user's object of interest depends on questionnaire surveys, so neither the acquisition efficiency nor the accuracy of the acquired result can be guaranteed.
To solve this technical problem, the invention provides a way of determining the user's object of interest from the user's viewpoint and real-time physiological response signal, improving both the efficiency and the accuracy of acquiring the user's object of interest.
Specifically, fig. 1 is a flowchart of a target object determination method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
step 101, acquiring a viewpoint position of a user with respect to an image frame, and determining a candidate object corresponding to the viewpoint position in the image frame.
It should be noted that the image frames in this embodiment may be individual images, such as photos, or video frames in a video stream. When the image frame is a video frame in a video stream, the video stream corresponding to the current scene can be captured by a camera in order to obtain the object the user gazes at in the current scene, which facilitates subsequent operations.
For example, when the application scenario is a driving scenario, as shown in fig. 2, an image frame of the in-vehicle environment video may be captured and its image features identified. The image features are input into a pre-constructed deep learning model, and the object types contained in the image frame are obtained from the model output; in fig. 2 the detected object types include a steering wheel, a display screen, and the like. The candidate object the user gazes at in the image frame can then be determined from the user's viewpoint position with respect to the image frame; the candidate object may be the steering wheel in the video frame, an object on the display screen, and so on.
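As an illustration only, the following sketch stands in an off-the-shelf torchvision detector for the unspecified pre-constructed deep learning model (an assumption; the patent names no model or library), and the file name cabin_frame.jpg is hypothetical. It returns the object types detected in a frame, analogous to recognizing the steering wheel and display screen above.

    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import (
        fasterrcnn_resnet50_fpn,
        FasterRCNN_ResNet50_FPN_Weights,
    )

    # Stand-in for the patent's "pre-constructed deep learning model".
    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()

    frame = read_image("cabin_frame.jpg")   # hypothetical in-vehicle frame
    batch = [weights.transforms()(frame)]   # model-specific preprocessing
    with torch.no_grad():
        detections = model(batch)[0]        # boxes, labels, scores

    # Keep only confidently detected object types.
    names = [weights.meta["categories"][i] for i in detections["labels"]]
    object_types = {n for n, s in zip(names, detections["scores"]) if s > 0.8}
    print(object_types)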
In this embodiment, the user's eyeball position, iris position, eyeball trajectory change rate, and the like may be acquired by a camera. The camera may be one mounted on the display screen showing the image frame, or another camera disposed elsewhere, which is not limited herein.
The user's viewpoint position in the image frame is determined from the above information; determining the viewpoint position from the eyeball position, iris position, and eyeball trajectory change rate can be realized with the prior art and is not repeated herein. The candidate object corresponding to the viewpoint position is then determined in the image frame.
In some possible examples, the contour line of each object in the image frame may be identified with algorithms such as contour recognition, and the region occupied by each object is determined from its contour line. The viewpoint position is then compared with each object's region: if the viewpoint position falls within the region of an object, that object is determined to be the candidate object; otherwise it is determined that the user is not gazing at any candidate object in the current image frame.
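The overlap check itself can be sketched as follows, assuming OpenCV is used for the contour recognition (the patent does not prescribe an algorithm or library); find_gazed_object and its inputs are names chosen here for illustration.

    import cv2
    import numpy as np

    def find_gazed_object(binary_frame: np.ndarray, viewpoint: tuple):
        """Return the index of the object contour containing the viewpoint,
        or None if the user is not gazing at any detected object."""
        contours, _ = cv2.findContours(
            binary_frame, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
        )
        for i, contour in enumerate(contours):
            # pointPolygonTest >= 0 means the point is inside or on the contour.
            if cv2.pointPolygonTest(contour, viewpoint, False) >= 0:
                return i
        return None  # no candidate object at this viewpoint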
Step 102, acquiring a real-time physiological response signal of a user in an image frame display process.
It should be understood that even if the user gazes at the candidate object, this does not mean that the user is interested in it; therefore, the user's real-time physiological response signal during image frame display must also be taken into account.
The real-time physiological response signal may include one or more of a skin conductance signal, a heart rate signal, an electrocardiogram signal, an eye movement signal, and an electroencephalogram signal, which is not limited herein.
In an embodiment of the present invention, the user's physiological response signal may be collected by a physiological response signal acquisition device. The device may differ according to the application scenario; for example, it may be a device that detects the user's physiological response signal through direct contact with the user's skin, such as a wristband, hat, glove, necklace, or face patch containing a physiological response signal sensor.
For example, when the application scenario is the driving scenario shown in fig. 2, the physiological response signal of the user may be collected by a physiological response signal collecting device disposed on the steering wheel or the roof of the vehicle.
And 103, determining emotion information of the user according to the real-time physiological response signal.
It should be noted that the user's real-time physiological response signal reflects the user's emotion information about the candidate object, and the emotion information may represent whether the user is actually focusing on the corresponding candidate object, or the user's level of interest in it.
The emotion information includes, but is not limited to, emotion type (such as happy, averse, etc.), real-time concentration, and the like.
In one embodiment of the invention, when the emotion information is real-time concentration, each user's concentration may be represented as a specific value: as a number on a percentile scale, where a larger number means the user is more concentrated; as a level in a grading system, where a higher level means the user is more concentrated; or as a count of specific symbols (stars, flowers, hearts, etc.), where more symbols mean the user is more concentrated.
It is understood that the user's physiological response signal can truly reflect the user's concentration. When the physiological response signal includes a skin conductance signal, the skin presents a certain resistance to current or voltage, and this resistance varies with emotional state, because the sympathetic and parasympathetic nerves antagonistically adjust with changes in the brain's cognitive state and their activity affects skin resistance. In a relaxed state, when the user is not concentrating closely on the candidate object, the skin resistance is high and the skin conductance signal is therefore low; under mental stress or concentration on the candidate object, the skin resistance is low and the skin conductance signal is therefore high.
It should be noted that, in different application scenarios, determining the user's concentration from the real-time physiological response signal may be implemented in different ways, as exemplified below:
the first example:
In this example, the correspondence between concentration and the real-time physiological response signal is obtained in advance from a large amount of experimental data and stored, so that after the real-time physiological response signal is acquired, the correspondence is queried to obtain the matching concentration of the user.
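A minimal sketch of this lookup, assuming the stored correspondence is a table of sampled (signal value, concentration) calibration pairs; the values below are invented for illustration, and linear interpolation between calibration points is an added assumption.

    import numpy as np

    # Hypothetical calibration table derived from prior experiments.
    SIGNAL_POINTS = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # e.g. skin conductance
    CONCENTRATION = np.array([10.0, 30.0, 55.0, 80.0, 95.0])   # percentile scale

    def lookup_concentration(signal_value: float) -> float:
        # Query the stored correspondence for the matching concentration.
        return float(np.interp(signal_value, SIGNAL_POINTS, CONCENTRATION))

    print(lookup_concentration(3.0))  # interpolated concentration score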
The second example:
A deep network model of the real-time physiological response signal is constructed in advance from a large amount of experimental data; the model takes the physiological response signal as input and outputs the user's concentration. The acquired real-time physiological response signal of the user is then input into the deep network model to obtain the user's concentration as the output.
The third example:
In this example, as shown in fig. 3, the above step 103 includes:
step 201, extracting real-time concentration characteristic information of the real-time physiological response signal.
As a possible implementation, the concentration feature information is the concentration times, which can be extracted by detecting the number of times the physiological response signal exceeds a preset threshold.
When the physiological response signal is a skin conductance signal, in practice the more interested the user is in the current candidate object, the richer the user's cranial nerve activity; stimulation by the candidate object changes the conductivity of the skin surface (the cause of the change may be sweat secretion, body-surface electrolytes, blood circulation speed, and the like), and the greater the change (increased conductivity), the larger the detected skin conductance signal.
Thus, in the present example, a preset threshold for the skin conductance signal is set in advance from a large amount of experimental data, and the concentration times are extracted as the number of times the skin conductance signal exceeds that preset threshold.
As another possible implementation, the concentration time may be extracted as the time during which the physiological response signal is detected to exceed the preset threshold.
For the same reason as above, when the signal is a skin conductance signal, the concentration time may be extracted as the total time during which the skin conductance signal stays above the preset threshold.
As yet another possible implementation, the concentration intensity may be extracted from the amplitude by which the physiological response signal exceeds the preset threshold.
Again for the same reason, the amplitude by which the skin conductance signal exceeds the preset threshold may be detected to extract the concentration intensity: if the preset threshold is A and the current skin conductance signal is B, with B greater than A, then B - A may be taken as the concentration intensity.
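The three extractors can be sketched together as follows, assuming the real-time signal is uniformly sampled; the function name and sampling interface are illustrative choices, not taken from the patent.

    import numpy as np

    def concentration_features(signal: np.ndarray, threshold: float,
                               sample_period_s: float):
        above = signal > threshold
        # Concentration times: number of rising edges into the above-threshold state.
        times = int(np.sum(above[1:] & ~above[:-1]) + (1 if above[0] else 0))
        # Concentration time: total seconds spent above the threshold.
        time_s = float(np.sum(above) * sample_period_s)
        # Concentration intensity: B - A for the largest excursion above threshold A.
        intensity = float(max(signal.max() - threshold, 0.0))
        return times, time_s, intensity

    sig = np.array([0.2, 0.9, 1.4, 0.8, 1.6, 1.1, 0.3])
    print(concentration_features(sig, threshold=1.0, sample_period_s=0.5))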
In different application scenarios, the concentration feature information obtained in the three examples above may serve as reference factors for further determining the user's concentration: any single one of them may be used alone, any two may be combined, or all three may be used together.
In addition, to ensure the accuracy of the further-determined concentration, in an embodiment of the present invention the preset threshold compared against the physiological response signal may be set according to the user's constitution type. For example, when the physiological response signal is a skin conductance signal, the keratin and dryness of the skin surface differ between female and male users, or between users of different ages, so the skin conductance signal measured at the same level of concentration differs as well.
Of course, in this application scenario the user's constitution type needs to be acquired in advance, and the acquisition manner may also differ by scenario. For example, image information of the users currently participating in object-of-interest acquisition may be captured by a camera, and the user identity at each position identified from the image information, so that the user's basic attributes and traits are associated with the skin sensor at the corresponding position; that is, different preset thresholds can be adopted for different users' constitution types when acquiring the concentration feature information.
Step 202, a preset algorithm is applied to each user's concentration feature information to obtain the user's real-time concentration during image frame display.
Specifically, the way the preset algorithm computes each user's concentration from the concentration feature information differs by application scenario, as illustrated below for different scenarios:
scene one:
In this scenario, the user's concentration feature information is a single piece of feature information: only the concentration times, the concentration time, or the concentration intensity.
Since a larger data value of the concentration feature information (for example, larger concentration times) means the user is more interested in the candidate object, the preset algorithm in this scenario is a linear operation on the concentration feature information, for example Y = a × X, where Y is the user's concentration, X is the data value of the concentration feature information, and a may be any number greater than 0.
However, the correlation between the concentration feature information and the user's concentration may differ depending on which feature it is. For example, when determining concentration, the user's concentration time generally carries more reference meaning than the concentration times, because a user who gazes at the candidate object many times but only briefly each time may still be considered not concentrated on it. Therefore, a may take different weight values for different concentration feature information; for example, a = 0.6 when the feature is the concentration times, and a = 0.8 when the feature is the concentration time.
Scene two:
In this scenario, the user's concentration feature information comprises several pieces of feature information, such as concentration times and concentration time; concentration time and concentration intensity; or concentration times, concentration time, and concentration intensity.
Since a larger data value of the concentration feature information (for example, larger concentration times) means the user is more concentrated on the candidate object, the preset algorithm is positively correlated with each feature's data value, for example Y = a1 × X1 + … + an × Xn, where n is a positive integer greater than or equal to 2, a1 to an are positive numbers that may or may not be equal (unequal values serve as weights reflecting the different reference meanings of the different concentration features), and X1 to Xn are the data values of the different concentration features.
Of course, in actual operation the preset algorithm in this scenario may be any algorithmic expression that is positively correlated with the data values of the concentration feature information, which are not listed here one by one.
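A sketch of the preset algorithm for scenario two (scenario one is the n = 1 special case); the weight values are illustrative assumptions echoing the example above, in which concentration time carries more reference meaning than the concentration times.

    # Assumed weights a1..an, all greater than 0.
    WEIGHTS = {"times": 0.6, "time": 0.8, "intensity": 0.7}

    def concentration_score(features: dict) -> float:
        # Y = a1*X1 + ... + an*Xn: positively correlated with every feature value.
        return sum(WEIGHTS[name] * value for name, value in features.items())

    print(concentration_score({"times": 3, "time": 12.5, "intensity": 0.6}))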
In another embodiment of the present invention, when the emotion information is an emotion type, a curve of the user's real-time physiological response signal over the display period of the image frame may be constructed and matched against the standard curve of each preset emotion type; when the matching degree with a standard curve exceeds a certain value, the emotion information is determined to be the corresponding emotion type.
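A sketch of this curve matching, using Pearson correlation as the matching degree (an assumption; the patent leaves the measure unspecified). The curves are assumed to be resampled to a common length beforehand.

    import numpy as np

    def classify_emotion(curve, templates, min_match=0.8):
        """Return the preset emotion type whose standard curve best matches
        the user's signal curve, or None if no match exceeds min_match."""
        best_type, best_score = None, min_match
        for emotion, template in templates.items():
            score = float(np.corrcoef(curve, template)[0, 1])
            if score > best_score:
                best_type, best_score = emotion, score
        return best_type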
And step 104, when the emotion information is preset emotion information, determining that the candidate object is a target object interested by the user.
In this embodiment, preset emotion information corresponding to a user's object of interest is set in advance; the preset emotion information may be a preset emotion type, a real-time concentration range, or the like.
Referring to fig. 2 and continuing with the driving scenario, in this embodiment the user's object of interest may be determined jointly from the user's gaze signal and physiological response signal. For example, if it is found that the user is concentrating on the display screen, and the display screen is currently showing navigation, the volume of the navigation broadcast may be raised for the user, and so on.
To sum up, the method for determining a target object according to the embodiment of the present invention acquires a viewpoint position of a user with respect to an image frame and determines the candidate object corresponding to the viewpoint position in the image frame; it then acquires a real-time physiological response signal of the user while the image frame is displayed, determines the user's emotion information from the real-time physiological response signal, and finally, when the emotion information is preset emotion information, determines that the candidate object is a target object the user is interested in. The user's object of interest is thus determined jointly from the real-time physiological response signal and the user's gaze information, meeting scene requirements based on the user's object of interest.
In practical application, to facilitate intuitive statistics of the topics users are interested in, the object of interest can be marked in the image frame. For example, in a video, a new video is generated after the objects the user is interested in are marked, so that when the video is subsequently opened, the target objects of interest can be seen at a glance.
In one embodiment of the present invention, as shown in fig. 4, the method further comprises:
in step 301, a target area where a target object is located in an image frame is determined.
In this embodiment, the target area where the target object is located in the image frame may be identified according to an algorithm such as contour recognition.
Step 302, adding identification information in the target area.
The identification information may be a fixed-shape labeling box circumscribing the target area, such as a rectangular box, or an arrow-shaped indicator, and the like.
In some possible embodiments, the identification information may be added via a layer: a layer containing the identification information is generated, with the transparency of all regions other than the region where the identification information is located set to 100% (fully transparent), and the layer is then superimposed on the target area.
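A minimal sketch of the layer-based marking, assuming Pillow (an assumption; the patent does not name an imaging library). The overlay is fully transparent everywhere except the identification box, matching the transparency rule described above.

    from PIL import Image, ImageDraw

    def mark_target(frame: Image.Image, box: tuple) -> Image.Image:
        # Layer containing only the identification information; alpha 0 elsewhere.
        layer = Image.new("RGBA", frame.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(layer)
        draw.rectangle(box, outline=(255, 0, 0, 255), width=3)  # rectangular marker
        # Superimpose the layer on the frame over the target area.
        return Image.alpha_composite(frame.convert("RGBA"), layer)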
In summary, the method for determining a target object according to the embodiment of the present invention can mark the corresponding object of interest in the image frame, making it easy to see the position of the object of interest intuitively.
Based on the above embodiments, after the target objects users are interested in have been obtained, the objects of interest of the corresponding population can be determined from them, so as to meet scene requirements.
In one embodiment of the present invention, as shown in fig. 5, the method further comprises:
Step 401, acquiring at least one target object that multiple users were interested in within a preset time period, and each user's frequency of interest in each type of target object.
The preset time period may be calibrated according to scene requirements; for example, it may be one month, or 10 days.
At least one target object that multiple users were interested in within the preset time period is acquired, and the type of each target object is determined according to a preset classification standard. The types may relate to scene needs; when the scene is a shopping scene, the corresponding types may be books, clothes, cosmetics, and the like. When determining the type of each target object, the image features of each target object may be identified and the type determined from those image features.
In addition, each user's frequency of interest in a type may be determined as the number of occurrences of target objects of that same type among the user's objects of interest.
Step 402, user portrait information of each user is obtained.
The user portrait information includes, but is not limited to, the user's identity information, age information, occupation information, and the like.
In step 403, the types of all target objects sharing the same user portrait information, and the total interest frequency of each type, are obtained.
In order to determine the objects of interest of the same type of population, in this embodiment the types of all target objects under the same user portrait information and the total interest frequency of each type are acquired.
In step 404, when the total interest frequency is greater than the preset threshold, the type of the corresponding target object is determined as the type of the object of interest corresponding to the user portrait information.
In this embodiment, when the total interest frequency exceeds the preset threshold, it indicates that users corresponding to this user portrait information are indeed interested in that type of target object; therefore, the type of the corresponding target object is determined to be a type of object of interest for that user portrait information.
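Steps 401 to 404 can be sketched with pandas (an assumption; the patent does not prescribe tooling), with invented records for illustration: interest frequencies are grouped by user portrait and object type, summed, and compared against the preset threshold.

    import pandas as pd

    records = pd.DataFrame({                  # hypothetical collected data
        "portrait": ["office_30s", "office_30s", "student_20s"],
        "object_type": ["books", "books", "cosmetics"],
        "interest_frequency": [4, 3, 6],
    })

    PRESET_THRESHOLD = 5
    totals = records.groupby(["portrait", "object_type"])["interest_frequency"].sum()
    # Types whose total interest frequency exceeds the threshold become the
    # types of object of interest for that user portrait information.
    print(totals[totals > PRESET_THRESHOLD])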
After the type of the corresponding target object is determined to be a type of object of interest for the user portrait information, an evaluation report can be generated and sent to a corresponding platform to meet platform needs. For example, the evaluation report can be sent to an advertising platform so that the platform can recommend suitable advertisements to the corresponding users according to the report, and can adjust and optimize the production and placement of advertisements for different audience types, thereby improving the advertisements' propagation effect.
In practical applications, as shown in fig. 6, the target objects of interest and the user portrait information of 100 users may be collected and combined to generate a test report. The report includes the types of target objects of interest under the same portrait information, and a recommendation level, recommended books, and the like can be determined from the frequency of interest in the target objects.
In summary, the method for determining a target object according to the embodiment of the present invention determines the types of target objects that interest users with the same portrait information, so that relevant services can be provided in a targeted manner according to user portrait information.
In order to implement the above embodiments, the present invention further provides a target object determination system. As shown in fig. 7, the target object determination system includes a physiological response signal acquisition device 100 and a processor 200, the physiological response signal acquisition device 100 being connected to the processor 200, wherein,
a physiological response signal acquisition device 100 for acquiring a viewpoint position of a user with respect to an image frame;
a processor 200 for determining the candidate object corresponding to the viewpoint position in the image frame;
the physiological response signal acquisition device 100 is further configured to acquire a real-time physiological response signal of the user in an image frame display process;
and the processor 200 is further configured to determine emotion information of the user according to the real-time physiological response signal, and determine that the candidate object is a target object in which the user is interested when the emotion information is preset emotion information.
It should be noted that the foregoing explanation of the method for determining a target object is also applicable to the system for determining a target object according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
In order to achieve the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for determining a target object of the above embodiments.
In order to implement the above embodiments, the present invention further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the method for determining a target object of the above embodiments.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A method of determining a target object, comprising the steps of:
acquiring a viewpoint position of a user for an image frame, and determining a candidate object corresponding to the viewpoint position in the image frame;
acquiring a real-time physiological response signal of the user in the image frame display process;
determining emotion information of the user according to the real-time physiological response signal;
and when the emotion information is preset emotion information, determining that the candidate object is the target object interested by the user.
2. The method of claim 1, wherein the real-time physiological response signal comprises:
one or more of skin conductance signal, heart rate signal, electrocardiosignal and electroencephalogram signal.
3. The method of claim 1, wherein when the emotional information includes real-time concentration, the determining the emotional information of the user from the real-time physiological response signal comprises:
extracting real-time concentration characteristic information of the real-time physiological response signal;
and calculating the real-time concentration characteristic information by using a preset algorithm to obtain the real-time concentration degree of the user in the image frame display process.
4. The method of claim 3, wherein said extracting real-time concentration feature information of the real-time physiological response signal comprises:
detecting the times that the real-time physiological response signal is larger than a preset threshold value, and extracting concentration times; and/or,
detecting the time when the real-time physiological response signal is greater than a preset threshold value, and extracting concentration time; and/or,
and detecting the amplitude of the real-time physiological response signal which is greater than a preset threshold value, and extracting concentration intensity.
5. The method of claim 1, further comprising:
determining a target area in which the target object is located in the image frame;
and adding identification information in the target area.
6. The method of claim 5, wherein said adding identification information to said target area comprises:
generating a layer containing the identification information;
and adding the layer on the target area.
7. The method of claim 1, further comprising:
acquiring at least one target object which is interested by a plurality of users in a preset time period and the interest frequency of each user on each type of target object;
acquiring user portrait information of each user;
acquiring types of all target objects with the same user portrait information and total interest frequency of each type;
and when the total interest frequency is greater than a preset threshold value, determining the type of the corresponding target object as the type of the object of interest corresponding to the user portrait information.
8. A target object determination system, comprising: a physiological response signal acquisition device and a processor, wherein the physiological response signal acquisition device is connected with the processor, wherein,
the physiological response signal acquisition equipment is used for acquiring the viewpoint position of the user aiming at the image frame;
the processor is configured to determine a candidate object corresponding to the viewpoint position in the image frame;
the physiological response signal acquisition equipment is also used for acquiring a real-time physiological response signal of the user in the image frame display process;
the processor is further configured to determine emotion information of the user according to the real-time physiological response signal, and when the emotion information is preset emotion information, determine that the candidate object is a target object interested by the user.
9. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method for determining a target object according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of determining a target object according to any one of claims 1-7.
CN202011458215.XA 2020-12-10 2020-12-10 Target object determination method, target object determination system, storage medium, and electronic device Pending CN112613364A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011458215.XA CN112613364A (en) 2020-12-10 2020-12-10 Target object determination method, target object determination system, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
CN112613364A (en) 2021-04-06

Family

ID=75233389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011458215.XA Pending CN112613364A (en) 2020-12-10 2020-12-10 Target object determination method, target object determination system, storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN112613364A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106681484A (en) * 2015-11-06 2017-05-17 北京师范大学 Image target segmentation system combining eye-movement tracking
CN110998566A (en) * 2017-06-30 2020-04-10 Pcms控股公司 Method and apparatus for generating and displaying360 degree video based on eye tracking and physiological measurements
CN109151576A (en) * 2018-06-20 2019-01-04 新华网股份有限公司 Multimedia messages clipping method and system
CN110019853A (en) * 2018-06-20 2019-07-16 新华网股份有限公司 Scene of interest recognition methods and system
CN108960937A (en) * 2018-08-10 2018-12-07 陈涛 Advertisement sending method of the application based on eye movement tracer technique of AR intelligent glasses

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191293A (en) * 2021-05-11 2021-07-30 创新奇智(重庆)科技有限公司 Advertisement detection method, device, electronic equipment, system and readable storage medium
CN115994717A (en) * 2023-03-23 2023-04-21 中国科学院心理研究所 User evaluation mode determining method, system, device and readable storage medium
CN115994717B (en) * 2023-03-23 2023-06-09 中国科学院心理研究所 User evaluation mode determining method, system, device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination