CN113115086A - Method for collecting elevator media viewing information based on video sight line identification - Google Patents


Info

Publication number
CN113115086A
CN113115086A (application CN202110413780.2A; granted as CN113115086B)
Authority
CN
China
Prior art keywords
sight
media
video
line
viewing information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110413780.2A
Other languages
Chinese (zh)
Other versions
CN113115086B (en)
Inventor
王志鹏
安乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shanlian Technology Co ltd
Zhejiang Institute of Special Equipment Science
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110413780.2A
Publication of CN113115086A
Application granted
Publication of CN113115086B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/812 Monomedia components thereof involving advertisement data

Abstract

The invention discloses a method for collecting elevator media viewing information based on video sight line recognition. Video monitoring equipment installed on an elevator door collects video images, which are transmitted to an intelligent device for sight line feature recognition; spatial calculation is performed using the sight line feature parameters and the monitoring equipment's installation parameters, and invalid sight line features are filtered out using the calculated horizontal distance; spatial collision calculation against the visual areas is then performed using the valid sight line feature parameters and the media playing equipment's installation parameters, the amount of sight line attention received by the currently played content in each visual area is counted, and the media playing equipment is controlled to play specific media content on demand according to that attention.

Description

Method for collecting elevator media viewing information based on video sight line identification
Technical Field
The invention relates to the fields of image recognition, video media playback control, and statistics, and in particular to a method for collecting elevator media viewing information based on video sight line recognition.
Background
Elevator media advertising is increasingly common in daily social life; the small space and heavy foot traffic inside an elevator effectively amplify the playing effect of advertisement media. However, traditional elevator media advertisement playing equipment has no sensing or input devices: it loops the configured advertisement content regardless of whether anyone is in the car, and plays that content in a fixed, preset order regardless of who is in the car. This wastes resources and makes advertisement delivery imprecise, leaving advertisers' interests unprotected and wasting social resources. Advertisers can only place advertisements blindly, learning nothing about how effective an advertisement was, which advertisements attracted attention, or which content within an advertisement drew the crowd's interest. Moreover, passengers can only passively accept the played media content, liked or not, with no say in the matter; this degrades the riding experience to some extent, and passengers' negative experience in turn greatly harms the effect of the media advertisements.
Disclosure of Invention
The invention aims to provide a method for collecting elevator media viewing information based on video sight line identification. It uses video monitoring image recognition to achieve intelligent perception, so that passengers' points of attention on media content can be collected as viewing information during playback, strengthening advertisers' insight into passenger information such as advertisement acceptance and attention. The same intelligent perception also gives passengers a degree of choice over the media content played, saving resources and improving both user experience and advertisement playing effect.
The purpose of the invention is realized by the following technical scheme: a method for collecting elevator media viewing information based on video line of sight recognition, the method comprising: video image acquisition is carried out by utilizing video monitoring equipment arranged on an elevator door, and the video image is transmitted to intelligent equipment arranged in the elevator for sight line characteristic identification; performing space calculation by using the sight line characteristic parameters and the installation parameters of the monitoring equipment, and filtering invalid sight line characteristics by using the calculated horizontal distance; and performing visual area space collision calculation by using the effective sight characteristic parameters and the media playing equipment installation parameters, and counting the sight attention amount of the currently played content in each visual area.
Furthermore, the video monitoring equipment installed on the elevator door may be mounted on a landing door or a car door and shoots the area in front of the elevator door, so that people waiting outside the elevator door and/or people riding inside the car can be effectively captured in front-facing, head-up face images, which facilitates sight line feature recognition.
Further, the installation parameters of the monitoring equipment comprise the height of a camera lens, the horizontal position of the lens, the pixel resolution of the lens, the visual horizontal visual angle and the vertical visual angle of the lens and the like.
Furthermore, sight line feature recognition is performed on the video images; an existing mature offline sight line recognition AI product can be adopted, identifying the number of portraits and, for each portrait, features such as the face rectangle's position and size, age, gender, binocular sight line space coordinates, and binocular sight line space vectors.
Further, filtering invalid sight line features using the calculated horizontal distance specifically comprises: using a preset table of average face heights for different portrait feature parameters, the real height corresponding to the current portrait is looked up in reverse; combined with the pixel position and size of the current portrait in the image, the horizontal distance between the portrait's spatial position and the camera's installation position is estimated, and portraits closer than the minimum credible distance or farther than the maximum credible distance are eliminated as invalid.
Further, the media playing device installation parameters include coordinates of the upper left corner of the media screen in a three-dimensional space when the camera lens of the video monitoring device is taken as an origin of the three-dimensional space, and information such as specific numerical values of the width and height of the screen, the resolution of the screen and the like.
Further, the visual area space collision calculation extends the identified portrait sight line space vector as a spatial ray and computes its intersection with the spatial position of the media playing device; if the ray passes through the surface of a visual area at that spatial position, the space coordinates of the intersection point are converted into plane coordinates within that surface and recorded. Specifically, the method comprises the following steps: with the camera installation position as the spatial origin, calculating the spatial coordinates of both eyes' sight lines using the calculated horizontal distance of the portrait; with the camera installation position as the spatial origin, calculating the space coordinates of each visual area of the currently played media content on the screen using the media playing equipment's installation parameters; and calculating the intersection coordinates of the vector's extension ray with the visual area's point-plane using the binocular sight line space vectors, converting the intersection's space coordinates into screen plane coordinates of the media playing equipment, and collecting those screen plane coordinates together with the currently played video media content and its time axis data as media viewing information.
Further, controlling the media playing device to play the specific media content according to the attention amount of the currently played content sight line, specifically: making a corresponding relation table of an image area and media content to be played for the current playing content; counting the sight attention amount of different image areas in the current sight feature recognition result; selecting the image area with the top ranking based on the maximum or minimum ranking result; and finding the media content to be played corresponding to the image area through table lookup, and controlling the media playing equipment to play the specific media content.
The invention has the beneficial effects that: the invention provides a method for collecting elevator media viewing information based on video sight line recognition. It defines the mounting position and shooting parameters of the video monitoring equipment and the mounting position and parameters of the media playing screen; defines the face and sight line features collected through image recognition; describes a method for removing invalid sight line feature information using the face's position in three-dimensional space; calculates spatial positions from the collected sight line features, performs point-plane spatial intersection calculation with the spatial position of the media playing screen, and converts the result into plane coordinates on the media screen; and describes a method for counting the sight line attention of each visual area of the media screen and obtaining the media content to be played through attention sorting and table lookup. The method enables both accurate collection of elevator media viewing information and accurate on-demand playing of elevator media content.
Drawings
FIG. 1 is a schematic diagram of installation position parameters of a video monitoring device and a media playing screen;
FIG. 2 is a schematic view of video surveillance shooting parameters;
FIG. 3 is a view of line-of-sight identification feature data;
FIG. 4 is a schematic diagram of the estimation of the three-dimensional spatial position of a human face;
FIG. 5 is a schematic diagram of the calculation of the intersection space of the sight line and the screen plane;
FIG. 6 is a flow diagram illustrating a process for controlling media content playback based on gaze characteristics;
FIG. 7 is the XML data organization of the average portrait true-height table;
fig. 8 shows the XML data organization form of the screen attention area table corresponding to different media files to be played.
Detailed Description
For better understanding of the technical solutions of the present application, the following detailed descriptions of the embodiments of the present application are provided with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The embodiment of the invention provides a method for collecting elevator media viewing information based on video sight line identification. While the elevator runs, video monitoring equipment installed on the car door captures video of the front of the area inside the car; video of people waiting outside the elevator can additionally be captured by monitoring equipment installed on a landing door. The collected video images are transmitted to an intelligent device for local offline sight line feature recognition; invalid sight line features are removed through spatial calculation; finally, spatial collision calculation is performed between the valid sight line features and the media screen's installation parameters, the regions of the media screen where portrait sight lines land are counted, and the distribution of sight line attention heat across the screen is fed back to the advertisement player as viewing information. The media playing equipment can also be controlled to play specific media content on demand according to the content of the different regions. The method mainly comprises the following steps, with the flow shown in figure 6:
1. setting installation parameters of video monitoring equipment and media playing equipment
The video monitoring device installation parameters and the media playing device installation parameters to be set include a camera lens height dh, a horizontal distance hd between the upper left corner of the media screen and the camera, a vertical distance vd between the upper left corner of the media screen and the camera, as shown in fig. 1, and a horizontal visual angle range α and a vertical visual angle range β shot by the camera lens, as shown in fig. 2. The parameters of the shooting range of the lens can be obtained from the factory parameters of the camera lens, and the height of the camera lens and the distance between the camera lens and the installation position of the screen can be obtained by measurement when the camera lens is installed on site.
2. Image recognition using offline view recognition SDK
The video capture camera is connected to an embedded smart device installed in the car. Using mature image-based sight line recognition AI via the offline SDK product provided by an image recognition vendor, sight line recognition is performed on the video images shot by the camera. The recognized features include: the pixel position and size of the face rectangle in the current image, the estimated gender and age of the portrait, the left- and right-eye position coordinates, and the left- and right-eye sight line vectors, as shown in fig. 3.
3. Estimating the horizontal distance between the portrait and the camera based on pixel scale
The size of a face in the picture shot by the video capture camera varies with the person's front-to-back position in the elevator, as shown in fig. 4. Let s be the cross-section of the visible area at distance d, in which the portrait's video image lies, and let h be the pixel height of the portrait's face rectangle at that distance. Given the camera's vertical viewing angle range β and the pixel resolution WIDTH × HEIGHT of the captured image (for example 1440 × 1080), the average actual face height FACE_HEIGHT_VALUE for the current portrait can be obtained by table lookup, as shown in fig. 7, and recorded as fh. The distance of the current portrait is then calculated as:
d = (fh × HEIGHT) / (2 × h × tan(β / 2))
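As a sketch of this distance estimate (function and parameter names are my own, not from the patent), the face's real height fh spans h pixels out of HEIGHT, while the camera's vertical field of view covers 2·d·tan(β/2) metres at distance d:

```python
import math

def estimate_distance(face_px_height: float, face_real_height_m: float,
                      image_px_height: int, vertical_fov_deg: float) -> float:
    """Estimate the horizontal distance d between a face and the camera.

    fh / (2 * d * tan(beta/2)) = h / HEIGHT  =>  d = fh * HEIGHT / (2 * h * tan(beta/2))
    """
    beta = math.radians(vertical_fov_deg)
    return (face_real_height_m * image_px_height) / (
        2.0 * face_px_height * math.tan(beta / 2.0))
```

For example, a 0.24 m face spanning 216 of 1080 pixels under a 45° vertical field of view comes out at roughly 1.45 m from the camera.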
4. eliminating invalid portrait by effective distance range limitation
Since images shot by the video capture camera may contain interference data from invalid portraits, as shown in fig. 3, the portrait distance d estimated by the formula in step 3 can serve as a parameter for eliminating them. Let mind be the minimum credible portrait distance and maxd the maximum credible portrait distance. If d < mind or d > maxd, the portrait at that distance is judged invalid. This check is applied to every portrait in the current image; the portraits remaining after elimination are valid, and their sight line data is used for the point-plane intersection space calculation with the media screen.
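A minimal sketch of this credible-distance filter (the list-of-dicts representation and the `distance` key are illustrative, not from the patent):

```python
def filter_valid_faces(faces, mind, maxd):
    """Keep only faces whose estimated distance lies within [mind, maxd].

    A face is invalid when d < mind or d > maxd, per step 4.
    """
    return [f for f in faces if mind <= f["distance"] <= maxd]
```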
5. Calculating each eye's spatial position from its pixel coordinates in the captured image
Given the horizontal distance d of the current portrait from the camera calculated in step 3, the camera's horizontal viewing angle α and vertical viewing angle β, the camera's vertical installation height dh, the pixel resolution WIDTH × HEIGHT of the captured image (for example 1440 × 1080), and the coordinates (x, y) of a single eye in the captured image with the upper left as origin, the height difference between that eye and the camera in space is:
eh = (HEIGHT / 2 − y) × (2 × d × tan(β / 2)) / HEIGHT
the horizontal position calculation formula of the monocular on the plane facing the camera in space by taking the camera mapping position as the center point is as follows:
ed = (x − WIDTH / 2) × (2 × d × tan(α / 2)) / WIDTH
The eye's space coordinates are then (ed, eh, d), taking the camera as the space origin, the x-axis running left to right along the camera mounting plane as seen facing the camera, the y-axis running vertically upward along the mounting plane, and the z-axis along the camera's perpendicular shooting direction.
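Step 5 can be sketched as follows, assuming a pinhole model with the optical axis through the image centre (function and parameter names are mine):

```python
import math

def eye_space_coords(x, y, d, width, height, alpha_deg, beta_deg):
    """Map an eye's pixel coordinate (x, y) (origin top-left) to space
    coordinates (ed, eh, d): camera at the origin, x-axis left-to-right
    on the mounting plane, y-axis up, z-axis along the shooting direction.
    """
    half_w = d * math.tan(math.radians(alpha_deg) / 2)  # half the visible width at distance d
    half_h = d * math.tan(math.radians(beta_deg) / 2)   # half the visible height at distance d
    ed = (x - width / 2) / (width / 2) * half_w    # horizontal offset from the camera axis
    eh = (height / 2 - y) / (height / 2) * half_h  # vertical offset (pixel y grows downward)
    return ed, eh, d
```

An eye at the exact image centre maps to (0, 0, d), directly on the camera's optical axis.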
6. Calculating a drop point on a media screen using a binocular spatial position and a sight vector
As shown in fig. 5, given an eye whose space coordinates, with the camera as the space origin, are (ed, eh, d), its projection onto the camera's plane has plane coordinates (ed, eh) with the camera centre as the plane origin. Given the eye's sight line angles about the space x- and y-axes, (pitch, yaw), the horizontal deviation between the eye's sight line landing point on the camera's plane and the eye's perpendicular projection onto that plane is:
sd=tan(yaw)×d
the vertical deviation distance calculation formula between the sight line falling point of the monocular on the plane where the camera is located and the vertical falling point of the monocular on the plane where the camera is located is as follows:
sh=tan(pitch)×d
the coordinates of the monocular vision point on the plane where the camera is located are as follows: (ed + sd, eh + sh).
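The two deviation formulas above can be sketched as a small projection helper (names are mine; pitch and yaw are taken in radians):

```python
import math

def gaze_point_on_camera_plane(ed, eh, d, pitch, yaw):
    """Project a gaze ray onto the camera mounting plane.

    (ed, eh, d) is the eye's position with the camera as origin; per
    step 6, sd = tan(yaw) * d and sh = tan(pitch) * d, and the landing
    point is (ed + sd, eh + sh) in camera-plane coordinates.
    """
    sd = math.tan(yaw) * d    # horizontal offset of the gaze landing point
    sh = math.tan(pitch) * d  # vertical offset of the gaze landing point
    return ed + sd, eh + sh
```

With zero pitch and yaw the gaze is parallel to the optical axis, so the landing point coincides with the eye's own plane coordinates.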
Given that the horizontal and vertical distances between the center position of the camera and the upper left corner of the screen mounting position of the display device are hd and vd, the actual length and width dimensions s _ w and s _ h of the screen of the display device, and the screen resolutions s _ width and s _ height of the display device, the horizontal pixel coordinate calculation formula of the sight line drop point on the screen of the display device is as follows:
s_x = (ed + sd − hd) × s_width / s_w
the vertical pixel coordinate calculation formula of the sight line falling point on the screen of the display device is as follows:
s_y = (−vd − (eh + sh)) × s_height / s_h
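A sketch of this plane-to-screen conversion. The sign conventions are an assumption: the patent only names hd and vd as horizontal/vertical distances, so this sketch assumes the screen's upper-left corner sits hd to the right of and vd below the camera centre, with the camera-plane y-axis pointing up:

```python
def plane_point_to_screen_px(px, py, hd, vd, s_w, s_h, s_width, s_height):
    """Convert a gaze landing point (px, py) on the camera plane
    (camera centre as origin, y up) to media-screen pixel coordinates.

    s_w, s_h: physical screen size; s_width, s_height: screen resolution.
    """
    sx = (px - hd) / s_w * s_width    # pixels from the screen's left edge
    sy = (-vd - py) / s_h * s_height  # pixels from the screen's top edge (screen top at y = -vd)
    return sx, sy
```

For a 1.0 m × 0.6 m screen at 1920 × 1080 whose upper-left corner is 0.5 m right of and 0.3 m below the camera, a gaze point at the screen's physical centre lands at pixel (960, 540).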
7. completing the extraction and storage of the media viewing information
And uniformly storing the currently played media content, the played time axis information and the coordinate information of the sight line falling point on the screen as media viewing information.
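One such record could be modeled as below; the field names are illustrative, since the patent only names the three pieces of data (played content, time axis, and drop-point coordinates):

```python
from dataclasses import dataclass

@dataclass
class ViewingRecord:
    """One media-viewing record: the content being played, the playback
    time-axis position, and the gaze drop point in screen pixels."""
    media_file: str   # currently played media content
    timeline_ms: int  # position on the playback time axis
    screen_x: float   # drop-point x in screen pixels
    screen_y: float   # drop-point y in screen pixels
```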
8. Counting sight line drop points per visual area by table lookup, sorting, and playing the corresponding media content to be played
Fig. 8 shows the XML data organization of the screen attention area table corresponding to the different media files to be played. The coordinates of each sight line's drop point on the screen are compared against the records in this table, and it is counted whether the current drop point falls within the screen attention area of a particular media item to be played. A drop point (x, y) is judged to be attending to a particular media item when all four of the following conditions are met:
the first condition is as follows: x > AREA _ ELFT _ VALUE
And a second condition: x < (AREA _ LEFT _ VALUE + AREA _ WIDTH _ VALUE)
And (3) carrying out a third condition: y > AREA _ TOP _ VALUE
And a fourth condition: y < (AREA _ TOP _ VALUE + AREA _ HEIGHT _ VALUE)
Finally, the sight line counts of all media items to be played are sorted, and the media file with the highest (or lowest) attention is selected as the next content to play.
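Step 8 can be sketched as follows. The AREA_LEFT_VALUE / AREA_TOP_VALUE / AREA_WIDTH_VALUE / AREA_HEIGHT_VALUE field names mirror the patent's XML table; the dict-based table layout is illustrative:

```python
def count_area_hits(gaze_points, areas):
    """Count gaze drop points per attention area and rank the media.

    gaze_points: list of (x, y) screen-pixel coordinates.
    areas: maps a media file name to (left, top, width, height).
    Returns (media, count) pairs, highest attention first.
    """
    counts = {media: 0 for media in areas}
    for x, y in gaze_points:
        for media, (left, top, w, h) in areas.items():
            # the four conditions of step 8: strictly inside the rectangle
            if left < x < left + w and top < y < top + h:
                counts[media] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```

The first entry of the result is the media file with the highest attention; taking the last entry instead would implement the lowest-attention variant the patent also allows.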
In one embodiment, a computer device is provided, which includes a memory and a processor, wherein the memory stores computer readable instructions, and when the computer readable instructions are executed by the processor, the processor executes the steps of the method for collecting elevator media viewing information based on video line-of-sight identification in the embodiments.
In one embodiment, a storage medium storing computer readable instructions is provided, which when executed by one or more processors, cause the one or more processors to perform the steps of the method for collecting elevator media viewing information based on video line of sight identification in the embodiments described above. The storage medium may be a nonvolatile storage medium.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (9)

1. A method for collecting elevator media viewing information based on video line of sight recognition is characterized by comprising the following steps: video monitoring equipment arranged on an elevator door is utilized to collect video images, and the video images are transmitted to intelligent equipment to carry out sight line characteristic identification; performing space calculation by using the sight line characteristic parameters and the installation parameters of the monitoring equipment, and filtering invalid sight line characteristics by using the calculated horizontal distance; and performing visual area space collision calculation by using the effective sight characteristic parameters and the media playing equipment installation parameters, and counting the sight attention amount of the currently played content in each visual area.
2. The method for collecting elevator media viewing information based on video line-of-sight recognition as claimed in claim 1, wherein the video monitoring device installed on the elevator door shoots the front area of the elevator door, so as to effectively shoot the front head-up face image of people waiting outside the elevator door and/or people riding in the elevator car door, thereby facilitating line-of-sight feature recognition.
3. The method for collecting elevator media viewing information based on video line of sight identification as recited in claim 1, wherein the monitoring device installation parameters include camera lens height, lens horizontal position, lens pixel resolution, lens visible horizontal and vertical viewing angles, and the like.
4. The method for collecting elevator media viewing information based on video line of sight identification as claimed in claim 1, wherein the line of sight feature identification is performed on the video image to identify the number of portraits, the rectangular position size of each portraits, the age, the gender, the spatial coordinates of the line of sight of both eyes, the spatial vector of the line of sight of both eyes, and other features.
5. The method for collecting elevator media viewing information based on video line of sight identification as claimed in claim 1, wherein said filtering out invalid line of sight features using calculated horizontal distance is specifically: the real height corresponding to the current portrait can be found out reversely by utilizing the set human face average height table corresponding to different portrait characteristic parameters, and the horizontal distance between the spatial position of the current portrait and the installation position of the camera is estimated by combining the pixel position and the size of the current portrait in the image, so that invalid portraits smaller than the credible distance or larger than the credible distance are eliminated.
6. The method for collecting elevator media viewing information based on video line of sight identification as claimed in claim 1, wherein the media player installation parameters include length and width of the player visual area, resolution of the visual area, vertical height of equipment installation, horizontal position of equipment installation, etc.
7. The method for collecting elevator media viewing information based on video line-of-sight identification as claimed in claim 1, wherein the visual area spatial collision calculation is performed by performing spatial ray extension on the identified portrait line-of-sight spatial vector, performing intersection calculation with the spatial position of the media playing device, and if the ray passes through the plane of the visual area in the spatial position, converting the spatial coordinate of the point where the ray intersects the plane into the plane coordinate in the plane and recording the plane coordinate.
8. The method for collecting elevator media viewing information based on video line of sight identification as claimed in claim 7, wherein the calculated coordinates of the point where the line of sight intersects the plane of the viewing area in spatial position are converted to pixel coordinates of the screen of the media playing device and collected as viewing information together with the currently playing video media content and time axis data.
9. The method for collecting elevator media viewing information based on video line of sight recognition as claimed in claim 1, wherein controlling the media playing device to play the specific media content according to the currently played content line of sight attention, specifically: making a corresponding relation table of an image area and media content to be played for the current playing content; counting and sequencing the sight attention amounts of different image areas in the current sight feature recognition result; according to the sorting result, the media content to be played corresponding to the image area is obtained by table look-up, and the media playing device is controlled to play the specific media content.
CN202110413780.2A 2021-04-16 2021-04-16 Method for collecting elevator media viewing information based on video line-of-sight identification Active CN113115086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110413780.2A CN113115086B (en) 2021-04-16 2021-04-16 Method for collecting elevator media viewing information based on video line-of-sight identification


Publications (2)

Publication Number Publication Date
CN113115086A true CN113115086A (en) 2021-07-13
CN113115086B CN113115086B (en) 2023-09-19

Family

ID=76718117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110413780.2A Active CN113115086B (en) 2021-04-16 2021-04-16 Method for collecting elevator media viewing information based on video line-of-sight identification

Country Status (1)

Country Link
CN (1) CN113115086B (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007006427A (en) * 2005-05-27 2007-01-11 Hitachi Ltd Video monitor
US20070285528A1 (en) * 2006-06-09 2007-12-13 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
US20100002072A1 (en) * 2008-07-02 2010-01-07 Sony Corporation Display apparatus and display method
JP2010204823A (en) * 2009-03-02 2010-09-16 Hitachi Information & Control Solutions Ltd Line-of-sight recognition device
JP2010232772A (en) * 2009-03-26 2010-10-14 Nomura Research Institute Ltd Content viewer analyzer, content viewer analyzing method, and program
US20120121126A1 (en) * 2010-11-17 2012-05-17 Samsung Electronics Co., Ltd. Method and apparatus for estimating face position in 3 dimensions
CN106022209A (en) * 2016-04-29 2016-10-12 杭州华橙网络科技有限公司 Distance estimation and processing method based on face detection and device based on face detection
CN108921125A (en) * 2018-07-18 2018-11-30 广东小天才科技有限公司 A kind of sitting posture detecting method and wearable device
CN108989888A (en) * 2018-07-18 2018-12-11 揭阳市聆讯软件有限公司 Video content playback method, device, smart machine and storage medium
CN109186584A (en) * 2018-07-18 2019-01-11 浙江臻万科技有限公司 A kind of indoor orientation method and positioning system based on recognition of face
CN109902630A (en) * 2019-03-01 2019-06-18 上海像我信息科技有限公司 A kind of attention judgment method, device, system, equipment and storage medium
WO2019114955A1 (en) * 2017-12-13 2019-06-20 Telefonaktiebolaget Lm Ericsson (Publ) Detecting user attention in immersive video
CN109993033A (en) * 2017-12-29 2019-07-09 中国移动通信集团四川有限公司 Method, system, server, equipment and the medium of video monitoring
CN111046744A (en) * 2019-11-21 2020-04-21 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN111046810A (en) * 2019-12-17 2020-04-21 联想(北京)有限公司 Data processing method and processing device
CN111639702A (en) * 2020-05-29 2020-09-08 深圳壹账通智能科技有限公司 Multimedia data analysis method, equipment, server and readable storage medium
WO2020186867A1 (en) * 2019-03-18 2020-09-24 北京市商汤科技开发有限公司 Method and apparatus for detecting gaze area and electronic device
CN111898553A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN112257507A (en) * 2020-09-22 2021-01-22 恒鸿达信息技术有限公司 Method and device for judging distance and human face validity based on human face interpupillary distance
CN112560615A (en) * 2020-12-07 2021-03-26 上海明略人工智能(集团)有限公司 Method and system for judging viewing screen and electronic equipment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ADEL LABLACK et al.: "Visual gaze projection in front of a target scene", 2009 IEEE International Conference on Multimedia and Expo *
ZHOU Xiaoling: "Video surveillance makes the effect of outdoor media advertising visible", Communication Enterprise Management, no. 05 *
ZHANG Chuang; CHI Jiannan; QIU Yafei; ZHANG Zhaohui: "Research on feature parameter extraction methods in gaze tracking systems", Journal of Image and Graphics, no. 09 *
SU Haiming; HOU Zhenjie; LIANG Jiuzhen; XU Yan; LI Xing: "A gaze tracking method using geometric features of the human eye", Journal of Image and Graphics, no. 06 *
JIN Chun; LI Yaping; GAO Qi; ZENG Wei: "Research on fixation point estimation algorithms in gaze tracking systems", Science Technology and Engineering, no. 14 *

Also Published As

Publication number Publication date
CN113115086B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
US8976229B2 (en) Analysis of 3D video
US8903123B2 (en) Image processing device and image processing method for processing an image
CN101588443A (en) Statistical device and detection method for television audience ratings based on human face
US20130251197A1 (en) Method and a device for objects counting
CN111353461B (en) Attention detection method, device and system of advertising screen and storage medium
KR102253989B1 (en) object tracking method for CCTV video by use of Deep Learning object detector
KR100560464B1 (en) Multi-view display system with viewpoint adaptation
US8169501B2 (en) Output apparatus, output method and program
CN111652900B (en) Method, system and equipment for counting passenger flow based on scene flow and storage medium
US20200302155A1 (en) Face detection and recognition method using light field camera system
CN105469054B (en) The model building method of normal behaviour and the detection method of abnormal behaviour
US20120257816A1 (en) Analysis of 3d video
CN104185012B (en) 3 D video form automatic testing method and device
CN108010058A (en) A kind of method and system that vision tracking is carried out to destination object in video flowing
JP4735242B2 (en) Gaze target object identification device
CN111767820A (en) Method, device, equipment and storage medium for identifying object concerned
CN113610865B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN111738241B (en) Pupil detection method and device based on double cameras
CN113115086B (en) Method for collecting elevator media viewing information based on video line-of-sight identification
US20200074612A1 (en) Image analysis apparatus, image analysis method, and recording medium
CN201467351U (en) Television audience rating statistical device based on human face detection
CN112818168B (en) Method for controlling elevator media playing sequence based on video face recognition
CN112114659A (en) Method and system for determining a fine point of regard for a user
CN111860261A (en) Passenger flow value statistical method, device, equipment and medium
CN111708907A (en) Target person query method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wang Zhipeng

Inventor after: Wu Kang

Inventor after: Ying Zheng

Inventor before: Wang Zhipeng

Inventor before: An Le

TA01 Transfer of patent application right

Effective date of registration: 20220610

Address after: No. 2, zhujiadou, group 3, Youyi Community, Wuchang Street, Yuhang District, Hangzhou, Zhejiang 311100

Applicant after: Wang Zhipeng

Applicant after: An Le

Applicant after: Zhejiang Institute of Special Equipment Science

Address before: 311200 room 1104, building 3, lantingyuan, Orlando Town, Chengxiang street, Xiaoshan District, Hangzhou City, Zhejiang Province

Applicant before: An Le

Applicant before: Wang Zhipeng

TA01 Transfer of patent application right

Effective date of registration: 20230710

Address after: Room B3071, Floor 3, Building 1 (North), No. 368, Liuhe Road, Puyan Street, Binjiang District, Hangzhou, Zhejiang, 310000

Applicant after: Zhejiang Shanlian Technology Co.,Ltd.

Applicant after: Zhejiang Institute of Special Equipment Science

Address before: No. 2, zhujiadou, group 3, Youyi Community, Wuchang Street, Yuhang District, Hangzhou, Zhejiang 311100

Applicant before: Wang Zhipeng

Applicant before: An Le

Applicant before: Zhejiang Institute of Special Equipment Science

GR01 Patent grant