WO2022209912A1 - Concentration value calculation system, concentration value calculation method, program, and concentration value calculation model generation system - Google Patents
- Publication number
- WO2022209912A1 (PCT/JP2022/012007)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- concentration
- subject
- concentration value
- value calculation
- target
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Definitions
- the present disclosure relates to a concentration value calculation system and a concentration value calculation method.
- an information processing device that calculates a person's degree of concentration (also called a concentration value) is known.
- in one conventional method, the maximum concentration value is set to 100, and the concentration value is calculated by subtracting from it the sum of the amount of change in facial expression and the amount of change in action, multiplied by the facing rate.
- the concentration value of a subject looking at one object is not necessarily high, and the concentration value of a subject looking at a plurality of objects is not necessarily low.
- for example, the conventional information processing apparatus may calculate a high concentration value even though it cannot be said that the person is concentrating.
- conversely, the conventional information processing apparatus may calculate a low concentration value even though it can be said that a student is concentrating on studying.
- the conventional information processing apparatus has a problem that the accuracy of the calculated concentration value is low.
- the present disclosure provides a concentration value calculation system and a concentration value calculation method capable of calculating concentration values with higher accuracy.
- a concentration value calculation system according to one aspect of the present disclosure includes: an image acquisition unit that acquires an image stream in which a subject is imaged; a region acquisition unit that acquires a plurality of concentration target regions, each corresponding to one of a plurality of concentration targets to be gazed at by the subject; and a concentration value calculation unit that calculates the concentration value of the subject. The concentration value calculation unit calculates the orientation of the face or line of sight of the subject from the acquired image stream, determines, based on the calculated orientation and the acquired plurality of concentration target regions, whether or not the subject is gazing at any one of the plurality of concentration targets, and calculates and outputs the concentration value of the subject based on the determination result.
- in a concentration value calculation method according to one aspect of the present disclosure, an image stream in which a subject is imaged is acquired; a plurality of concentration target regions, each corresponding to one of a plurality of concentration targets to be gazed at by the subject, are acquired; the orientation of the face or line of sight of the subject is calculated from the acquired image stream; and it is determined whether or not the subject is gazing at any one of the plurality of concentration targets.
- a concentration value of the subject is calculated and output based on the determination result.
- one aspect of the present disclosure can be implemented as a program for causing a computer to execute the concentration value calculation method.
- it can be realized as a computer-readable recording medium storing the program.
- the concentration value can be calculated with higher accuracy.
- FIG. 1 is a diagram for explaining a usage example of a concentration value calculation system according to an embodiment.
- FIG. 2 is a block diagram showing the configuration of the concentration value calculation system according to the embodiment.
- FIG. 3 is a diagram for explaining a concentration target area according to the embodiment.
- FIG. 4 is a flow chart showing a concentration value calculation method according to the embodiment.
- FIG. 5 is a diagram for explaining determination of a gaze state according to the embodiment.
- FIG. 6 is a diagram for explaining transition state determination according to the embodiment.
- FIG. 7 is a diagram for explaining concentration values calculated in the embodiment.
- FIG. 8 is a diagram explaining another example of the focused object.
- FIG. 9 is a block diagram showing a functional configuration of a concentration value calculation unit according to another example.
- as shown in Patent Literature 1 (Japanese Patent Laid-Open No. 2002-200012), an information processing apparatus that calculates a concentration value of a person is conventionally known.
- the information processing device disclosed in Patent Literature 1 can calculate the concentration value only in a specific situation. Specifically, in this information processing device, the more intently a single object is viewed, the higher the calculated concentration value. Therefore, in order to apply the information processing device to a working subject and calculate the subject's concentration value for the work, the work must be performed while gazing at a single object, so the device can be applied only to limited uses.
- an office worker who performs office work using a computer equipped with a display unit and a sub-display naturally gazes at both the display unit provided in the computer and the sub-display (that is, alternately). Therefore, in a configuration in which the concentration value is calculated based only on gazing at the display unit provided in the computer, an inaccurate concentration value may be calculated in a work scene in which the worker gazes at the sub-display even while concentrating.
- likewise, a student who studies using a tablet terminal that plays a lecture video, a textbook, and a notebook selectively gazes at all of the tablet terminal, the textbook, and the notebook. Therefore, in a configuration in which the concentration value is calculated based only on gazing at the tablet terminal, an inaccurate concentration value may be calculated in a study scene in which the student gazes at the textbook or the notebook even while concentrating.
- the concentration value calculated using the information processing device cannot be said to have high accuracy.
- the present disclosure has been made in view of the above circumstances, and provides a concentration value calculation system and the like that can be applied to a wide range of uses and that can calculate a subject's concentration value with high accuracy.
- each figure is a schematic diagram and is not necessarily strictly illustrated. Therefore, scales and the like are not always the same in each drawing.
- the same reference numerals are assigned to substantially the same configurations, and overlapping descriptions are omitted or simplified.
- the first concentration object and the second concentration object are exemplified as the plurality of concentration objects, but there may be three or more concentration objects.
- FIG. 1 is a diagram for explaining a concentration value calculation system according to an embodiment.
- a concentration value calculation system 100 (see FIG. 2 to be described later) according to the present embodiment is realized, for example, by being built into a computer (an example of the first concentration object 97) used by the subject 99.
- a camera, display, etc. mounted on the computer can be used as a part of the concentration value calculation system 100.
- an externally connected camera attached to a sub-display (an example of the second focused object 98) used by the subject 99 is used as the imaging device 20.
- because the concentration value calculation system 100 can be incorporated into the computer or the like used by the subject 99, its input can be obtained using the computer's camera and its output can be presented using the computer's display. Further, when the work performed by the subject 99 is work using a computer, the computer used for the work can calculate the concentration value of the subject 99 in parallel with the work. Note that part of the processing functions, information storage functions, and so on of the concentration value calculation system 100 may be implemented by a cloud server or the like.
- the concentration value calculation system 100 is a system that uses an image stream in which the subject 99 is captured to calculate the concentration value of the subject 99 in that image stream. Therefore, as long as the concentration value calculation system 100 can acquire an image stream in which the subject 99 is imaged, it can calculate a concentration value even in a situation where the gaze target of the subject 99 shifts among two or more concentration objects. In other words, the concentration value calculation system 100 can be applied to a subject 99 who gazes at each of a plurality of concentration objects in a time-division manner.
- FIG. 2 is a block diagram showing the configuration of the concentration value calculation system according to the embodiment.
- the concentration value calculation system 100 includes an arithmetic device 10, an imaging device 20, a storage device 30, and an output device 40.
- Each device constituting the concentration value calculation system 100 may be housed in one housing and integrated, or may be implemented as a plurality of individual devices connected to one another via communication lines.
- the computing device 10 has an image acquisition unit 11 , an area acquisition unit 12 , and a concentration value calculation unit 13 .
- Arithmetic device 10 is implemented by a processor, a memory, and a program executed using these.
- the computing device 10 is installed, for example, as one of the functions in a computer, which is an example of the first concentration object 97 .
- the image acquisition unit 11 is a functional unit that acquires an image stream in which the subject 99 is imaged.
- the image acquisition unit 11 acquires an image stream in which the subject 99 is imaged by the imaging device 20 .
- the image acquisition unit 11 may be integrated with the imaging device 20 .
- the concentration value of the subject 99 is immediately calculated from the image stream acquired by the image acquisition unit 11 .
- “immediately” includes a delay of several milliseconds to several seconds considering the time required for calculation processing, data transfer, and the like.
- the image acquisition unit 11 may be implemented in any way as long as it can acquire an image stream.
- the image acquisition unit 11 may acquire an image stream stored in the storage device 30 in the past. In this way, the measurement of the concentration value by the concentration value calculation system 100 does not have to be instantaneous.
- the image acquisition unit 11 transmits the acquired image stream to the concentration value calculation unit 13 .
- the area acquisition unit 12 acquires a plurality of concentration target areas corresponding to a plurality of concentration objects to be watched by the subject 99, including the first concentration object 97 and the second concentration object 98.
- the concentration target area is one area set for each concentration target object, set so that when the subject 99 gazes at that concentration target object, the orientation of the face of the subject 99 falls within the area. Such a concentration target area is described below.
- FIG. 3 is a diagram for explaining a concentration target area according to the embodiment. In FIG. 3, the concentration target areas set for the first concentration target object 97 and the second concentration target object 98 when the target person 99 is viewed from above are indicated by dot hatching.
- one focused target area is set for each of the first focused target object 97 and the second focused target object 98 .
- a first concentration target area 97a is set for the first concentration target object 97
- a second concentration target area 98a is set for the second concentration target object 98, respectively.
- the concentration target area is set, for example, between a virtual line connecting the center of the visual field 99a of the subject 99 (that is, the position of the subject 99, more specifically, the position of the eyes of the subject 99) to one end of the concentration target and a virtual line connecting it to the other end. That is, the concentration target area is set as a predetermined angular range within the field of view 99a of the subject 99.
- since the concentration target area depends on the subject 99 in this way, it is preferably set for each subject 99. Therefore, for example, an operation for setting the concentration target areas for each subject is performed prior to calculation of the concentration value.
- for example, the subject 99 is instructed to gaze at the corners of the concentration target, and the concentration target area is determined experimentally based on the orientation of the face of the subject 99 at that time. The orientation of the face of the subject 99 is calculated based on region determination images acquired by imaging the subject 99 with the imaging device 20 while the instruction is presented. Since this calculation is the same as the calculation of the face orientation of the subject 99 from the image stream performed by the arithmetic device 10, its description is omitted here in favor of the description of the concentration value calculation given later.
- in this way, the first concentration target area 97a and the second concentration target area 98a are determined.
- Information about the determined concentration target area is stored in the storage device 30 in advance.
- the area acquisition unit 12 acquires the necessary concentration target area by referring to the information stored in the storage device 30 .
- acquiring the concentration target area means reading and acquiring information indicating the concentration target area.
- the concentration target area determined by actual measurement may be obtained as it is and used for calculating the concentration value.
- a first concentration target region 97a and a second concentration target region 98a are acquired as the concentration target regions.
- in the above example, an instruction to gaze at the four corners of the concentration target is given, but depending on the shape of the concentration target, five or more corners may be gazed at, or only one end and the other end of the concentration target in a predetermined direction, such as horizontal or vertical, may be gazed at.
- if the concentration target is of a size that fits within the effective visual field of the subject 99 (that is, the visual field area, spreading from the central visual field to its surroundings, in which information can be effectively obtained), an instruction simply to gaze at the center of the concentration target may be used. In this case, an area of about plus or minus 10 degrees from the orientation of the face of the subject 99 is automatically determined as the concentration target area.
- the concentration target area may be determined by machine learning the direction in which the target person 99 is likely to gaze during work.
- the concentration target area can also be determined automatically by detecting the position of the center of the field of view 99a of the subject 99, without depending on an actual physical concentration target such as a computer or a sub-display.
- the concentration target area may be determined according to the space used by the target person 99. For example, a subject 99 using a desk on which a computer with two displays is installed is naturally expected to gaze at the two displays. Therefore, if the position of the center of the field of view 99a of the subject 99 is detected, the focused target area can be determined based on the positional relationship between the imaging device 20 and the two displays.
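The calibration variants above can be sketched as yaw-angle intervals; the function names, the tolerance margin, and the ±10 degree half-width fallback are illustrative assumptions, not values fixed by the disclosure:

```python
def region_from_calibration(corner_yaws_deg, margin_deg=2.0):
    """Determine a concentration target region (as a face-yaw interval in
    degrees) from orientations measured while the subject gazes at the
    corners of the target; margin_deg is an assumed tolerance."""
    return (min(corner_yaws_deg) - margin_deg, max(corner_yaws_deg) + margin_deg)

def region_from_center(center_yaw_deg, half_width_deg=10.0):
    """When the target fits in the effective visual field, an area of about
    +/-10 degrees around the single measured orientation is used."""
    return (center_yaw_deg - half_width_deg, center_yaw_deg + half_width_deg)

# Four-corner calibration for a main display, center-only for a sub-display.
print(region_from_calibration([-18.0, -15.0, -4.0, -2.0]))  # (-20.0, 0.0)
print(region_from_center(20.0))                             # (10.0, 30.0)
```

Storing these intervals is one simple way to realize the "information indicating the concentration target area" kept in the storage device 30.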
- the concentration value calculation unit 13 is a functional unit that calculates the concentration value of the subject 99 based on the image stream acquired by the image acquisition unit 11 and the concentration target areas acquired by the area acquisition unit 12. The calculation of the concentration value, which is the main function of the concentration value calculation unit 13, is described in detail later.
- the concentration value of the subject 99 calculated by the concentration value calculator 13 is output and presented on, for example, a computer screen.
- the calculated concentration value of the subject 99 is output to and stored in a server device (not shown) or the like, where it can be checked by a manager or the like who supervises the work of the subject 99.
- the imaging device 20 is a camera that captures images as described above.
- the imaging device 20 continuously acquires images to generate and output an image stream.
- the storage device 30 is a device for storing information such as a semiconductor memory.
- the storage device 30 stores information such as the concentration target area, and receives reference to the information by the area acquisition unit 12 or the like.
- the output device 40 is, for example, a display controller; it converts the concentration value calculated by the concentration value calculation unit 13 into a presentation image and outputs a signal for presenting that image on a display.
- FIG. 4 is a flow chart showing a concentration value calculation method according to the embodiment.
- the concentration value calculation system 100 may perform some operations not shown in FIG. 4, such as determination of a concentration target region.
- the image acquisition unit 11 acquires an image stream (S101).
- An image stream is a group of images that are captured in sequence. Therefore, the image acquisition unit 11 acquires an image stream by sequentially acquiring a plurality of continuously captured images.
- the area acquisition unit 12 acquires a plurality of concentration target areas corresponding to each of the plurality of concentration objects by referring to the storage device 30 (S102). Acquisition of the concentration target areas (S102) may be performed before acquisition of the image stream (S101). In this way, the order of some operations of the concentration value calculation system 100 may be changed.
- the concentration value calculation unit 13 calculates the concentration value of the subject 99 based on the acquired image stream and the acquired concentration target areas (S103). Specifically, the concentration value calculation unit 13 calculates the face orientation and the like of the subject 99 from the acquired image stream, determines, based on the calculated orientation and the plurality of concentration target areas, whether or not the subject 99 is in a gaze state in which one of the plurality of concentration targets is being gazed at, and calculates and outputs the concentration value of the subject 99 based on the determination result.
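The flow S101-S103 can be sketched as a per-frame loop. Every helper, region value, and coefficient below is an illustrative placeholder, not the patent's implementation; the real components would use the image analysis described in the text:

```python
def run_concentration_pipeline(image_stream, regions,
                               face_yaw, performance, classify, scale):
    """S101-S103 as a loop: for each frame, compute the face orientation,
    classify the gaze state against the concentration target regions, and
    scale the frame's performance value into a concentration value."""
    prev_yaw = None
    for frame in image_stream:
        yaw = face_yaw(frame)  # orientation calculation (S103, first step)
        state = classify(yaw, prev_yaw if prev_yaw is not None else yaw, regions)
        yield scale(performance(frame), state)  # per-frame concentration value
        prev_yaw = yaw

# Minimal demo with stand-in helpers: one region, a 0.5 penalty outside it.
regions = [(-20.0, 0.0)]
out = list(run_concentration_pipeline(
    [{"yaw": -5.0, "perf": 80}, {"yaw": 15.0, "perf": 80}],
    regions,
    face_yaw=lambda f: f["yaw"],
    performance=lambda f: f["perf"],
    classify=lambda yaw, prev, rs: "gaze"
        if any(lo <= yaw <= hi for lo, hi in rs) else "non-gaze",
    scale=lambda p, s: p if s == "gaze" else p * 0.5,
))
print(out)  # [80, 40.0]
```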
- [Calculation of Concentration Value by Concentration Value Calculation System] The operation of the concentration value calculation unit 13 described above is described in more detail below. First, the calculation of the orientation of the face of the subject 99 is described. The face orientation of the subject 99 is calculated based on the captured image stream. The concentration value calculation unit 13 inputs the acquired image stream to a machine-learned face orientation calculation model and obtains, as output, the face orientation of the subject 99 in the image stream. More specifically, the face orientation calculation model outputs the face orientation of the subject 99 as a normal vector of the front of the face.
- the relative angle of the output normal vector of the front of the face with respect to the imaging device capturing the image stream is treated as the orientation of the face of the subject 99.
- the face orientation calculation model is an example of an orientation calculation model; because it outputs a face orientation for each of the plurality of images forming the image stream, it can follow changes in the face orientation of the subject 99 over the image stream.
- the face orientation calculation model is a trained model that has been trained in advance using a dataset combining teacher images of the subject 99 (corresponding to the image stream) with correct face-orientation data corresponding to those images (corresponding to the relative angle indicating the face orientation of the subject).
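The conversion from the model's output normal vector to a face angle relative to the imaging device might look as follows; the camera convention (camera looking along -z toward the subject) and the function name are assumptions for illustration:

```python
import math

def face_angles_from_normal(n):
    """Convert a face-front normal vector (x, y, z) into yaw/pitch angles
    in degrees relative to the camera's optical axis, assuming the camera
    looks along -z toward the subject."""
    x, y, z = n
    yaw = math.degrees(math.atan2(x, -z))                  # left/right rotation
    pitch = math.degrees(math.atan2(y, math.hypot(x, z)))  # up/down rotation
    return yaw, pitch

# A face pointing straight at the camera has zero relative angle.
print(face_angles_from_normal((0.0, 0.0, -1.0)))  # (0.0, 0.0)
```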
- the calculation of the face orientation of the subject 99 is not limited to the example using the face orientation calculation model described above.
- alternatively, the orientation of the face of the subject 99 may be calculated by fitting feature points of the face of the subject 99 (the corners of the eyes, the tip of the nose, the corners of the mouth, the chin, and the like) to a three-dimensional model.
- the concentration value calculator 13 may calculate the orientation of the face of the subject 99 from the image stream using any existing technique.
- the orientation of the line of sight of the subject 99 can be used instead of the orientation of the face of the subject 99.
- the direction of the line of sight of the subject 99 can be calculated by image analysis centering on the eyeball of the subject 99 .
- the orientation of the line of sight of the subject 99 can be handled in substantially the same way as the orientation of the face of the subject 99. Therefore, examples using the line-of-sight orientation can be understood by reading "orientation of the line of sight" for "orientation of the face" as appropriate in the description of the present disclosure.
- FIG. 5 is a diagram for explaining determination of a gaze state according to the embodiment.
- FIG. 6 is a diagram for explaining determination of a transitional state according to the embodiment. FIGS. 5 and 6 show the subject 99 from the same overhead viewpoint. In FIG. 5, the subject 99 is in a gaze state; in FIG. 6, the subject 99 is in a transitional state.
- the face orientation of the subject 99 described above is indicated as direction 99b by the dashed arrow (that is, the normal vector) in FIGS.
- the concentration value calculation unit 13 determines that the target person 99 is in the gaze state if the direction 99b is within the concentration target area. In other words, the direction 99b only needs to fall within the angular range of either the first focused target area 97a or the second focused target area 98a.
- the face orientation may have a certain angular range. That is, the face direction may be in range 99c or the like.
- the direction 99b is the direction that bisects the range 99c (that is, the center line).
- when the direction 99b is not within any of the concentration target areas, a non-gazing state, that is, a state of looking away, can be determined.
- however, when the direction 99b is moving from one concentration target area toward another, the subject 99 is determined to be in a transition state in which the line of sight transitions between two of the plurality of concentration targets. In this way, in the calculation of the concentration value in the present embodiment, classifying whether the subject 99 is in the gaze state, the transitional state, or the non-gazing state makes it possible to calculate a concentration value with high accuracy.
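A minimal sketch of this three-way classification, representing concentration target areas as yaw-angle intervals and using a simple per-frame movement-magnitude criterion for the transition state (the threshold and region values are assumptions, not figures from the disclosure):

```python
def classify_state(yaw_deg, prev_yaw_deg, regions, move_thresh_deg=3.0):
    """Classify the subject's state from the face yaw angle (degrees).
    regions: list of (min, max) yaw intervals. A frame outside every
    region counts as 'transition' while the orientation is still moving
    fast enough, otherwise 'non-gaze'."""
    if any(lo <= yaw_deg <= hi for lo, hi in regions):
        return "gaze"
    if abs(yaw_deg - prev_yaw_deg) >= move_thresh_deg:
        return "transition"
    return "non-gaze"

regions = [(-20.0, 0.0), (10.0, 30.0)]
print(classify_state(-5.0, -5.0, regions))  # inside first region -> gaze
print(classify_state(5.0, -2.0, regions))   # moving between regions -> transition
print(classify_state(5.0, 5.5, regions))    # stationary between regions -> non-gaze
```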
- the determination as to whether the subject is in the transitional state or the non-gazing state may be calculated based on the movement vector of the subject's 99 face direction over time.
- for example, a transition state may be determined when the face orientation of the subject 99 moves from one of the plurality of concentration target areas toward another, rather than from or to a region that belongs to none of the concentration target areas.
- in addition, the duration of a state in which the face orientation of the subject 99 remains stationary at a fixed position may be taken into consideration. In this example, even if the direction 99b is located between the first concentration target area 97a and the second concentration target area 98a, if this state continues for a certain period of time, it should be judged a non-gazing state.
- here, "between one region and another region" refers to the portion of a line segment connecting an arbitrary point in one region to an arbitrary point in the other region that belongs to neither region.
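The duration criterion can be sketched as a per-frame pass that demotes a long-lasting "outside every region" run from a transition state to a non-gazing state; the frame rate and dwell limit are assumed example values:

```python
def states_with_dwell(yaw_stream, regions, fps=10, dwell_limit_s=1.0):
    """Label each frame 'gaze', 'transition', or 'non-gaze'. An orientation
    that stays outside every region for longer than dwell_limit_s seconds
    is treated as looking away rather than transitioning."""
    limit = int(dwell_limit_s * fps)  # max frames tolerated between regions
    states, outside_run = [], 0
    for yaw in yaw_stream:
        if any(lo <= yaw <= hi for lo, hi in regions):
            outside_run = 0
            states.append("gaze")
        else:
            outside_run += 1
            states.append("transition" if outside_run <= limit else "non-gaze")
    return states

regions = [(-20.0, 0.0), (10.0, 30.0)]
print(states_with_dwell([-5.0, 5.0, 5.0, 5.0], regions, fps=2, dwell_limit_s=1.0))
# ['gaze', 'transition', 'transition', 'non-gaze']
```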
- the concentration value calculation unit 13 further calculates a performance value, which is a unit concentration value of the subject 99, from the acquired image stream.
- the performance value is a numerical value serving as the base of the concentration value, calculated from the body movement, posture, facial expression, and so on of the subject 99 in the image. Any existing technique may be used to calculate the performance value. For example, for body movement, if an image is acquired in which body movement is greater in number and degree than in the previous image, the performance value of the subject 99 is calculated to be low. Also, for example, for posture, a performance value is linked in advance to each posture of the subject 99, and the linked performance value is obtained by matching the posture in the acquired image.
- the performance value is calculated by summing the numerical values of the features seen in the subject 99 on the image.
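As a toy illustration of summing per-feature scores into a performance value (the feature names and score values are placeholders, not quantities defined by the disclosure):

```python
def performance_value(feature_scores):
    """Sum the per-feature scores (body movement, posture, facial
    expression, and so on) observed on one image into a single
    performance value."""
    return sum(feature_scores.values())

print(performance_value({"body_movement": 30, "posture": 25,
                         "facial_expression": 20}))  # 75
```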
- the state of the target person 99 may be taken into account.
- therefore, the state of the subject 99 is further integrated to optimize the performance value and calculate the concentration value. For example, even if the performance value is high, the concentration value should be calculated to be low if the subject 99 is actually in a non-gazing state.
- the concentration value calculation unit 13 calculates the concentration value of the subject 99 by multiplying the calculated performance value by a first coefficient when the subject 99 is determined to be in the gaze state, by a second coefficient less than or equal to the first coefficient when the subject 99 is determined to be in the transition state, and by a third coefficient smaller than the second coefficient when the subject 99 is determined to be in neither the gaze state nor the transition state.
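The per-state coefficient multiplication can be sketched as follows; the coefficient values are illustrative assumptions that satisfy the stated ordering (third < second <= first):

```python
def concentration_value(performance, state,
                        first_coef=1.0, second_coef=0.75, third_coef=0.25):
    """Scale the performance value by a per-state coefficient:
    gaze -> first coefficient, transition -> second, otherwise third."""
    coef = {"gaze": first_coef, "transition": second_coef,
            "non-gaze": third_coef}[state]
    return performance * coef

print(concentration_value(75, "gaze"))        # 75.0
print(concentration_value(75, "transition"))  # 56.25
print(concentration_value(75, "non-gaze"))    # 18.75
```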
- each coefficient used to optimize the performance value according to the state of the subject 99 may depend on the habits the subject 99 exhibits when concentrating, so the coefficients may be determined by conducting a preliminary test in advance.
- Each coefficient may be set for each subject 99 .
- the transitional state may be treated in the same way as the gaze state. That is, the first coefficient and the second coefficient may have the same value.
- FIG. 7 is a diagram for explaining concentration values calculated in the embodiment.
- FIG. 7 shows a graph of concentration values calculated for each image forming an image stream, that is, per time.
- an example is shown in which the calculated concentration value differs depending on whether the subject 99 is in a transitional state (solid line graph) or in a gaze state or non-gazing state (broken line graph).
- the reliability of the performance value calculated when the subject 99 is in the transitional state is lower than when the subject 99 is in the gaze state or the non-gazing state. Therefore, the concentration value is calculated so as to reduce the influence of such performance values on the concentration value.
- the performance value at the first point in time and the performance value at the second point in time are used to calculate the concentration value at the second point in time.
- the second point in time follows the first point in time after the minimum unit period for calculating the concentration value in the concentration value calculation system 100.
- the minimum unit period for calculating the concentration value in the concentration value calculation system 100 is, for example, one second. Therefore, the concentration value for 1 second is calculated at the second time immediately after the concentration value for 1 second is calculated at the first time.
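The formula referred to in the following bullets does not survive in this text. From the descriptions of its first term, its second term, and the factor α, a plausible reconstruction (with assumed symbols: C for the concentration value, P for the performance value, and w₁ through w₄ for the first through fourth weighting factors) is:

```latex
C(t_2) \;=\; \alpha\, v_1 \;+\; (1-\alpha)\, v_2,
\qquad
v_1 =
\begin{cases}
w_1\, P(t_1) & \text{not transitional at } t_1 \\
w_3\, P(t_1) & \text{transitional at } t_1
\end{cases},
\qquad
v_2 =
\begin{cases}
w_2\, P(t_2) & \text{not transitional at } t_2 \\
w_4\, P(t_2) & \text{transitional at } t_2
\end{cases}
```

with 0 < α < 1; this matches the later statements that setting α = 0 would use only the performance value at the second point in time.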
- the first term on the right side of the above formula is the first value, obtained by multiplying the performance value at the first point in time by the first weighting factor when it is determined that the subject 99 is not in the transitional state, or the third value, obtained by multiplying the performance value at the first point in time by the third weighting factor when it is determined that the subject 99 is in the transitional state.
- the second term on the right side of the above formula is the second value, obtained by multiplying the performance value at the second point in time by the second weighting factor when it is determined that the subject 99 is not in the transitional state, or the fourth value, obtained by multiplying the performance value at the second point in time by the fourth weighting factor when it is determined that the subject 99 is in the transitional state.
- α in the formula is a weighting factor that determines which of the performance value at the first point in time and the performance value at the second point in time should be emphasized.
- when the subject 99 is not in the transitional state, the performance value at the first point in time need not be considered as much. That is, since the performance value at the second point in time in this case is sufficiently reliable, it is appropriate to increase the weight of the performance value at the second point in time. Therefore, α should be set to a relatively small value. To satisfy the above equation, α should be a value greater than 0 and less than 1. If α were set to 0, the concentration value at the second point in time would be calculated from the performance value at the second point in time alone, without considering the performance value at the first point in time.
- conversely, when the subject 99 is in the transitional state, the performance value at the second point in time is less reliable, so α should be set to a relatively large value.
- in this way, the reliability of the performance value can be expressed as a numerical value (that is, a weighting factor) based on the state of the subject 99, and the previous performance value can be incorporated into the calculation of the concentration value for each minimum unit period.
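The two-time-point weighting described above behaves like an exponential-smoothing update. Below is a minimal sketch under the assumption that the formula has the form C(t2) = α·P(t1) + (1 − α)·P(t2); the α values 0.2 and 0.7 are illustrative assumptions, not values from the disclosure.

```python
def blended_concentration(p_first: float, p_second: float,
                          in_transition: bool) -> float:
    """Blend the performance values at two consecutive time points.

    alpha weights the earlier (first-time-point) value; (1 - alpha) weights
    the second. A larger alpha is used in the transitional state, where the
    second-time-point performance value is considered less reliable.
    The values 0.2 and 0.7 are illustrative assumptions only.
    """
    alpha = 0.7 if in_transition else 0.2  # must satisfy 0 < alpha < 1
    return alpha * p_first + (1.0 - alpha) * p_second

# Outside the transitional state the recent value dominates; inside it,
# the earlier, more reliable value dominates.
steady = blended_concentration(10.0, 4.0, in_transition=False)
moving = blended_concentration(10.0, 4.0, in_transition=True)
```

With these sample numbers, the non-transitional result stays close to the newer performance value, while the transitional result stays close to the older one, which is exactly the reliability trade-off the text describes.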
- as a result, the concentration value of the target person 99 can be calculated with higher accuracy, taking the state of the target person 99 into consideration, even when there are a plurality of concentration targets.
- the concentration value calculation system 100 includes the image acquisition unit 11 that acquires an image stream in which the subject 99 is imaged, the region acquisition unit 12 that acquires a plurality of concentration target regions, each of which corresponds to one of a plurality of concentration targets to be gazed at by the target person 99, and the concentration value calculation unit 13 that calculates the concentration value of the target person 99.
- the concentration value calculation unit 13 calculates the face orientation or line-of-sight direction of the subject 99 from the acquired image stream, determines, based on the calculated face orientation or line-of-sight direction of the subject 99 and the acquired plurality of concentration target regions, whether the target person 99 is gazing at any one of the plurality of concentration targets, and calculates and outputs the concentration value of the target person 99 based on the determination result.
- Such a concentration value calculation system 100 can calculate the concentration value of the subject 99 based on whether the subject 99 is gazing at any one of the plurality of concentration objects.
- when the target person 99 is gazing at one of the concentration targets, the concentration value can be made relatively high, and when the target person 99 is not gazing at any concentration target, the concentration value can be made relatively low. That is, even when there are two or more concentration targets, a high concentration value is calculated while the subject gazes at any one of them, and otherwise the concentration value is calculated to be low.
- the concentration value calculation system 100 can thus calculate the concentration value with higher accuracy even when there are a plurality of concentration objects.
- each of the plurality of concentration target regions may be determined based on a region determination image captured when the subject 99 is presented with an instruction to gaze at the concentration target corresponding to that concentration target region, and stored in advance in the storage unit, and the region acquisition unit 12 may acquire the plurality of concentration target regions by referring to the storage unit.
- each of the plurality of concentration target regions may be determined, in response to an instruction to gaze at one end and the other end of the concentration target corresponding to that region, as the region between the face orientation or line-of-sight direction of the subject 99 gazing at the one end and the face orientation or line-of-sight direction of the subject 99 gazing at the other end, and stored in advance in the storage unit.
- that is, a concentration target region is determined as the region between the orientation of the face or line of sight of the target person 99 when gazing at the one end and when gazing at the other end.
- the concentration target regions may be set in advance for each space used by the subject 99 and stored in the storage unit, and the region acquisition unit 12 may acquire the plurality of concentration target regions by referring to the storage unit.
- the concentration target regions may be set in advance for each subject 99 and stored in the storage unit, and the region acquisition unit 12 may acquire the plurality of concentration target regions by referring to the storage unit.
- the orientation of the subject 99's face is calculated from the acquired image stream as a normal vector of the subject 99's face, and each of the plurality of concentration target regions has a predetermined angular range centered on the position of the subject 99. The concentration value calculation unit 13 may determine that the target person 99 is in the gaze state of gazing at the concentration target corresponding to the concentration target region having the predetermined angular range when the calculated normal vector is within that angular range.
- in this way, the normal vector of the face of the target person 99 can be calculated from the image stream, and whether the target person 99 is in the gaze state can be determined.
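The angular test described above can be sketched as follows. This is a simplified, hypothetical 2D illustration: the concentration target region is represented as a [lo, hi] azimuth interval centered on the subject, with 0 degrees taken along the line from the subject to the imaging device, following the embodiment's convention.

```python
import math

def face_azimuth_deg(normal_xz: tuple) -> float:
    """Azimuth of the face normal in the horizontal plane, with 0 degrees
    along the subject-to-camera line (a 2D simplification of the
    face-normal vector described in the embodiment)."""
    x, z = normal_xz
    return math.degrees(math.atan2(x, z))

def is_gazing(normal_xz: tuple, region_deg: tuple) -> bool:
    """True if the face-normal azimuth falls inside the concentration
    target region's predetermined angular range [lo, hi] (degrees)."""
    lo, hi = region_deg
    return lo <= face_azimuth_deg(normal_xz) <= hi

# A region spanning -15..+15 degrees around the camera direction:
# a face normal pointing straight at the camera (x=0, z=1) counts as
# gazing; one pointing 90 degrees away (x=1, z=0) does not.
```

In a full implementation each concentration target region would carry its own angular range, and the subject would be in the gaze state whenever the normal vector falls inside any one of them.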
- when the concentration value calculation unit 13 determines that the target person 99 is not in the gaze state, it may further determine whether the target person 99 is in a transition state in which the line of sight transitions between two of the plurality of concentration targets, and may calculate the concentration value of the subject 99 based on the determination result.
- in this way, when the target person 99 is not in the gaze state, the system does not simply treat this as the non-gazing state, but can determine whether the line of sight is transitioning between one concentration target region and another, and can calculate the concentration value based on this determination result.
- the concentration value calculation unit 13 may further calculate, from the acquired image stream, a performance value, which is a unit concentration value of the target person 99; multiply the calculated performance value by a first coefficient when the target person 99 is determined to be in the gaze state; multiply it by a second coefficient equal to or less than the first coefficient when the target person 99 is determined to be in the transitional state; and multiply it by a third coefficient smaller than the second coefficient when the target person 99 is determined to be neither in the gaze state nor in the transitional state, thereby calculating the concentration value of the subject 99.
- in this way, the concentration value is calculated from the performance value so that it is highest in the gaze state, next highest in the transitional state, and lowest in the non-gazing state.
- the concentration value calculation unit 13 may further calculate, from the acquired image stream, a performance value, which is a unit concentration value of the subject 99, at a first time point and at a second time point following the first time point.
- when it is determined that the subject 99 is in the gaze state, or that the subject 99 is neither in the gaze state nor in the transition state, it adds a first value obtained by multiplying the calculated performance value at the first time point by a first weighting factor and a second value obtained by multiplying the calculated performance value at the second time point by a second weighting factor that is the difference between the first weighting factor and 1.
- when it is determined that the subject 99 is in the transition state, it adds a third value obtained by multiplying the calculated performance value at the first time point by a third weighting factor different from the first weighting factor and a fourth value obtained by multiplying the calculated performance value at the second time point by a fourth weighting factor that is the difference between the third weighting factor and 1, thereby calculating the concentration value of the subject 99.
- the first weighting factor and the third weighting factor may be numerical values greater than 0 and less than 1.
- in this way, the concentration value at the second point in time is calculated taking into consideration the performance values calculated at the first point in time and the second point in time. By changing the weight, that is, the degree to which the performance value at the first point in time affects the concentration value at the second point in time, a more accurate concentration value can be calculated.
- the concentration value calculation method acquires an image stream in which the subject 99 is imaged, acquires a plurality of concentration target regions, each of which corresponds to one of a plurality of concentration targets to be gazed at by the subject 99, calculates the face orientation or line-of-sight direction of the target person 99 from the acquired image stream, determines, based on the calculated orientation and the acquired plurality of concentration target regions, whether the target person 99 is gazing at any one of the plurality of concentration targets, and calculates and outputs the concentration value of the subject 99 based on the determination result.
- Such a concentration value calculation method can provide the same effects as the concentration value calculation system described above.
- the present disclosure may also be realized as a program for causing a computer to execute the concentration value calculation method described above.
- the concentration value calculation system may be realized by the arithmetic device alone, by providing only the arithmetic device described above and connecting it to an external imaging device, an external storage device, and an external output device.
- the imaging device, storage device, and output device are not essential components.
- each of the plurality of concentration targets need not be a physical object.
- for example, the first application window 97b displayed on the display 96 of a computer may be the first concentration target, and the second application window 98b may be the second concentration target; the subject matter of the present disclosure may also be applied to such a case.
- FIG. 9 is a block diagram showing a functional configuration of a concentration value calculation unit according to another example.
- the computing device 10 includes a concentration value calculator 13a in place of the concentration value calculator 13 in the embodiment.
- the concentration value calculation unit 13a can directly obtain the concentration value of the subject 99 by inputting the acquired image stream and the acquired concentration target regions to the concentration value calculation model 13b, and outputs the output result of the concentration value calculation model 13b as it is as the concentration value of the subject 99.
- the concentration value calculation model 13b is a learning model in which the correlation between the image stream, the concentration target region, and the concentration value is learned in advance by machine learning.
- the concentration value calculation system 100 further includes a model generation unit 13c for generating (learning) the concentration value calculation model 13b.
- in order to generate the concentration value calculation model 13b, the model generation unit 13c uses, as training data, input data corresponding to the two pieces of information, the image stream and the concentration target regions, and correct (or correct and incorrect) output data for that input data.
- as input data for learning, a teacher image/teacher region D1 corresponding to the image stream and the concentration target regions is input.
- the teacher concentration value D2 of the subject 99 is input as the output data for learning.
- the concentration value calculation model 13b is adjusted using a data set combining the teacher image/teacher region D1 and the teacher concentration value D2.
- the weighting coefficient assigned to each neuron is adjusted by a method such as backpropagation so that appropriate output data are obtained for the input data; machine learning is performed in this way.
- the concentration value calculation unit 13a inputs the acquired image stream and the acquired concentration target regions to the trained concentration value calculation model 13b, so that an appropriate concentration value of the subject 99 is output. In this way, the calculation of the concentration value by the concentration value calculation unit 13a can also be realized using a machine-learned learning model.
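The training procedure described above (teacher input D1, teacher concentration value D2, weights adjusted by backpropagation) can be illustrated in miniature. The sketch below substitutes a plain linear model trained by gradient descent for the actual concentration value calculation model 13b; all data, dimensions, and names are hypothetical stand-ins, not the disclosure's model.

```python
import random

random.seed(0)

# Hypothetical training data: each row stands in for features derived from
# an image-stream frame plus its concentration target regions (the teacher
# image / teacher region D1); ys stands in for teacher concentration
# values (D2).
true_w = [0.5, -1.2, 0.8]
xs = [[random.gauss(0, 1) for _ in range(3)] for _ in range(100)]
ys = [sum(wi * xi for wi, xi in zip(true_w, x)) for x in xs]

# A linear stand-in for the concentration value calculation model 13b:
# the weights are nudged toward the teacher values by gradient descent,
# analogous to the backpropagation step described above.
w = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(300):
    grad = [0.0, 0.0, 0.0]
    for x, y in zip(xs, ys):
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for j in range(3):
            grad[j] += err * x[j] / len(xs)
    w = [wj - lr * gj for wj, gj in zip(w, grad)]

# Mean squared error against the teacher concentration values.
mse = sum((sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
          for x, y in zip(xs, ys)) / len(xs)
print(mse)  # converges toward 0 as the model reproduces the teacher values
```

At inference time, the analogue of the concentration value calculation unit 13a would simply evaluate the trained model on a new feature row to obtain a concentration value directly.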
- although the concentration value calculation system 100 including the model generation unit 13c has been described, it is also possible to realize a configuration that uses only the already-generated concentration value calculation model 13b, without going through the learning process.
- the present disclosure can be realized not only as a concentration value calculation system, but also as a program including, as steps, the processes performed by each constituent element of the concentration value calculation system, and as a computer-readable recording medium on which the program is recorded.
- the program may be pre-recorded on a recording medium, or may be supplied to the recording medium via a wide area network including the Internet.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Human Resources & Organizations (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Entrepreneurship & Innovation (AREA)
- Economics (AREA)
- Educational Administration (AREA)
- Development Economics (AREA)
- Game Theory and Decision Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Marketing (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- General Business, Economics & Management (AREA)
- Image Analysis (AREA)
Abstract
Description
(Knowledge leading to the present disclosure)
As shown in Patent Literature 1, information processing apparatuses that calculate a person's concentration value are conventionally known. The information processing apparatus disclosed in Patent Literature 1 can calculate the concentration value only in a specific situation. Specifically, in this information processing apparatus, the more intently a single object is gazed at, the higher the calculated concentration value becomes. Therefore, in order to use the information processing apparatus on a subject who is working and to calculate the subject's concentration value for the work, the work performed by the subject must be carried out while gazing at a single object, so the apparatus can be applied only to limited uses.
(Embodiment)
[Configuration of concentration value calculation system]
First, using FIG. 1, a concentration value calculation system according to an embodiment will be described. FIG. 1 is a diagram for explaining a concentration value calculation system according to an embodiment.
[Operation of concentration value calculation system]
Next, the operation of the concentration value calculation system 100 will be described with reference to FIG. 4. FIG. 4 is a flowchart showing the concentration value calculation method according to the embodiment. Note that the concentration value calculation system 100 may perform some operations not shown in FIG. 4, such as determining the concentration target regions. As shown in FIG. 4, when the operation of the concentration value calculation system 100 starts, the image acquisition unit 11 acquires an image stream (S101). The image stream is a group of images consisting of a plurality of continuously captured images. The image acquisition unit 11 therefore acquires the image stream by sequentially acquiring the plurality of continuously captured images.
[Calculation of concentration value by concentration value calculation system]
The operation of the concentration value calculation unit 13 described above will now be explained in more detail. First, the calculation of the face orientation and the like of the subject 99 will be described. The face orientation of the subject 99 is calculated based on the acquired image stream. The concentration value calculation unit 13 inputs the acquired image stream to a machine-learned face orientation calculation model and obtains the face orientation of the subject 99 in the image stream as an output. More specifically, the face orientation calculation model outputs the face orientation of the subject 99 as a normal vector of the front side of the face. In the concentration value calculation system 100, the relative angle of the output normal vector of the front side of the face, with the direction of the straight line connecting the subject 99 and the imaging device 20 that images the subject 99 taken as 0 degrees, is treated as the face orientation of the subject 99, more specifically, as the orientation of the subject's face with respect to the imaging device that captures the image stream.
[Effects, etc.]
As described above, the concentration value calculation system 100 according to the present embodiment includes the image acquisition unit 11 that acquires an image stream in which the subject 99 is imaged, the region acquisition unit 12 that acquires a plurality of concentration target regions, each of which corresponds to one of a plurality of concentration targets to be gazed at by the subject 99, and the concentration value calculation unit 13 that calculates the concentration value of the subject 99. The concentration value calculation unit 13 calculates the face orientation or line-of-sight direction of the subject 99 from the acquired image stream, determines, based on the calculated face orientation or line-of-sight direction and the acquired plurality of concentration target regions, whether the subject 99 is in a gaze state of gazing at any one of the plurality of concentration targets, and calculates and outputs the concentration value of the subject 99 based on the determination result.
(Other embodiments)
Although the concentration value calculation system, the concentration value calculation method, and the program according to the present disclosure have been described above based on the above embodiment and the like, the present disclosure is not limited to the above embodiment. For example, forms obtained by applying various modifications conceivable to a person skilled in the art to each embodiment, and forms realized by arbitrarily combining the constituent elements and functions of each embodiment without departing from the spirit of the present disclosure, are also included in the present disclosure.
REFERENCE SIGNS
12 Region acquisition unit
13 Concentration value calculation unit
99 Subject
100 Concentration value calculation system
Claims (14)
- 1. A concentration value calculation system comprising: an image acquisition unit that acquires an image stream in which a subject is imaged; a region acquisition unit that acquires a plurality of concentration target regions, each of which corresponds to one of a plurality of concentration targets to be gazed at by the subject; and a concentration value calculation unit that calculates a concentration value of the subject, wherein the concentration value calculation unit calculates the orientation of the subject's face or line of sight from the acquired image stream, determines, based on the calculated orientation and the acquired plurality of concentration target regions, whether the subject is in a gaze state of gazing at any one of the plurality of concentration targets, and calculates and outputs the concentration value of the subject based on the determination result.
- 2. The concentration value calculation system according to claim 1, wherein each of the plurality of concentration target regions is determined based on a region determination image captured when the subject is presented with an instruction to gaze at the concentration target corresponding to that concentration target region, and is stored in advance in a storage unit, and the region acquisition unit acquires the plurality of concentration target regions by referring to the storage unit.
- 3. The concentration value calculation system according to claim 2, wherein each of the plurality of concentration target regions is determined, in response to an instruction to cause the subject to gaze at one end and the other end of the concentration target corresponding to that concentration target region, as the region between the face or line-of-sight orientation of the subject gazing at the one end and the face or line-of-sight orientation of the subject gazing at the other end, and is stored in advance in the storage unit.
- 4. The concentration value calculation system according to claim 1, wherein the concentration target regions are set in advance for each space used by the subject and stored in a storage unit, and the region acquisition unit acquires the plurality of concentration target regions by referring to the storage unit.
- 5. The concentration value calculation system according to claim 1, wherein the concentration target regions are set in advance for each subject and stored in a storage unit, and the region acquisition unit acquires the plurality of concentration target regions by referring to the storage unit.
- 6. The concentration value calculation system according to any one of claims 1 to 5, wherein the orientation of the subject's face is calculated from the acquired image stream as a normal vector of the subject's face, each of the plurality of concentration target regions has a predetermined angular range centered on the subject's position, and the concentration value calculation unit determines, when the calculated normal vector is within the predetermined angular range, that the subject is in the gaze state of gazing at the concentration target corresponding to the concentration target region having that angular range.
- 7. The concentration value calculation system according to any one of claims 1 to 6, wherein, when determining that the subject is not in the gaze state, the concentration value calculation unit further determines whether the subject is in a transition state in which the line of sight transitions between two of the plurality of concentration targets, and calculates the concentration value of the subject based on the determination result.
- 8. The concentration value calculation system according to claim 7, wherein the concentration value calculation unit further calculates, from the acquired image stream, a performance value that is a unit concentration value of the subject, multiplies the calculated performance value by a first coefficient when the subject is determined to be in the gaze state, multiplies the calculated performance value by a second coefficient equal to or less than the first coefficient when the subject is determined to be in the transition state, and multiplies the calculated performance value by a third coefficient smaller than the second coefficient when the subject is determined to be neither in the gaze state nor in the transition state, to calculate the concentration value of the subject.
- 9. The concentration value calculation system according to claim 7, wherein the concentration value calculation unit further calculates, from the acquired image stream, a performance value that is a unit concentration value of the subject at a first time point and at a second time point following the first time point; when the subject is determined to be in the gaze state, or to be neither in the gaze state nor in the transition state, adds a first value obtained by multiplying the calculated performance value at the first time point by a first weighting factor and a second value obtained by multiplying the calculated performance value at the second time point by a second weighting factor that is the difference between the first weighting factor and 1; when the subject is determined to be in the transition state, adds a third value obtained by multiplying the calculated performance value at the first time point by a third weighting factor different from the first weighting factor and a fourth value obtained by multiplying the calculated performance value at the second time point by a fourth weighting factor that is the difference between the third weighting factor and 1, to calculate the concentration value of the subject; and the first weighting factor and the third weighting factor are numerical values greater than 0 and less than 1.
- 10. The concentration value calculation system according to any one of claims 1 to 9, wherein the orientation of the subject's face or line of sight is calculated as an output result by inputting the image stream to an orientation calculation model in which the correlation between the image stream and the orientation of the subject's face or line of sight with respect to an imaging device that captures the image stream has been learned by machine learning.
- 11. A concentration value calculation method comprising: acquiring an image stream in which a subject is imaged; acquiring a plurality of concentration target regions, each of which corresponds to one of a plurality of concentration targets to be gazed at by the subject; calculating the orientation of the subject's face or line of sight from the acquired image stream; determining, based on the calculated orientation and the acquired plurality of concentration target regions, whether the subject is in a gaze state of gazing at any one of the plurality of concentration targets; and calculating and outputting the concentration value of the subject based on the determination result.
- 12. A program for causing a computer to execute the concentration value calculation method according to claim 11.
- 13. A concentration value calculation system comprising: an image acquisition unit that acquires an image stream in which a subject is imaged; a region acquisition unit that acquires a plurality of concentration target regions, each of which corresponds to one of a plurality of concentration targets to be gazed at by the subject; and a concentration value calculation unit that calculates a concentration value of the subject, wherein the concentration value calculation unit calculates the concentration value of the subject by inputting the image stream and the plurality of concentration target regions to a concentration value calculation model in which the correlation between the image stream and the plurality of concentration target regions, on the one hand, and the concentration value of the subject, on the other, has been learned by machine learning.
- 14. A concentration value calculation model generation system comprising a model generation unit that generates the concentration value calculation model according to claim 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023510917A JP7531148B2 (en) | 2021-03-30 | 2022-03-16 | Concentration value calculation system, concentration value calculation method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021057345 | 2021-03-30 | ||
JP2021-057345 | 2021-03-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022209912A1 true WO2022209912A1 (en) | 2022-10-06 |
Family
ID=83459109
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/012007 WO2022209912A1 (en) | 2021-03-30 | 2022-03-16 | Concentration value calculation system, concentration value calculation method, program, and concentration value calculation model generation system |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP7531148B2 (en) |
WO (1) | WO2022209912A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000259834A (en) * | 1999-03-11 | 2000-09-22 | Toshiba Corp | Registering device and method for person recognizer |
JP2016111612A (en) * | 2014-12-09 | 2016-06-20 | 三星電子株式会社Samsung Electronics Co.,Ltd. | Content display device |
JP2017140107A (en) * | 2016-02-08 | 2017-08-17 | Kddi株式会社 | Concentration degree estimation device |
WO2020116181A1 (en) * | 2018-12-03 | 2020-06-11 | パナソニックIpマネジメント株式会社 | Concentration degree measurement device and concentration degree measurement method |
Also Published As
Publication number | Publication date |
---|---|
JP7531148B2 (en) | 2024-08-09 |
JPWO2022209912A1 (en) | 2022-10-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Arabadzhiyska et al. | Saccade landing position prediction for gaze-contingent rendering | |
Lemaignan et al. | From real-time attention assessment to “with-me-ness” in human-robot interaction | |
US10665022B2 (en) | Augmented reality display system for overlaying apparel and fitness information | |
JP6165846B2 (en) | Selective enhancement of parts of the display based on eye tracking | |
US10607063B2 (en) | Information processing system, information processing method, and recording medium for evaluating a target based on observers | |
US10832483B2 (en) | Apparatus and method of monitoring VR sickness prediction model for virtual reality content | |
Cidota et al. | Workspace awareness in collaborative AR using HMDS: a user study comparing audio and visual notifications | |
US11442685B2 (en) | Remote interaction via bi-directional mixed-reality telepresence | |
JP4868360B2 (en) | Interest trend information output device, interest trend information output method, and program | |
KR20210043174A (en) | Method for providing exercise coaching function and electronic device performing thereof | |
WO2013069344A1 (en) | Gaze position estimation system, control method for gaze position estimation system, gaze position estimation device, control method for gaze position estimation device, program, and information storage medium | |
KR20190048144A (en) | Augmented reality system for presentation and interview training | |
WO2022209912A1 (en) | Concentration value calculation system, concentration value calculation method, program, and concentration value calculation model generation system | |
JP2016111612A (en) | Content display device | |
CN115762772B (en) | Method, device, equipment and storage medium for determining emotional characteristics of target object | |
Lee et al. | A study on virtual reality sickness and visual attention | |
Lin et al. | An eye-tracking and head-control system using movement increment-coordinate method | |
JP7233631B1 (en) | posture improvement system | |
JP2008046802A (en) | Interaction information output device, interaction information output method and program | |
CN111651043B (en) | Augmented reality system supporting customized multi-channel interaction | |
WO2022070747A1 (en) | Assist system, assist method, and assist program | |
Kao et al. | Eye gaze tracking based on pattern voting scheme for mobile device | |
TWI674518B (en) | Calibration method of eye-tracking and device thereof | |
Boczon | State of the art: eye tracking technology and applications | |
US20140062997A1 (en) | Proportional visual response to a relative motion of a cephalic member of a human subject |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22780132; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2023510917; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 11202305456W; Country of ref document: SG |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22780132; Country of ref document: EP; Kind code of ref document: A1 |