JP2009139857A - Contents display control device, contents display control method, and contents display control program - Google Patents


Info

Publication number
JP2009139857A
JP2009139857A (application JP2007318778A)
Authority
JP
Japan
Prior art keywords
attribute information
content
display screen
image
passers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2007318778A
Other languages
Japanese (ja)
Inventor
Mitsuhiro Kurosaki
Ko Mishima
Toshihiro Mochizuki
Takehiro Sekine
Original Assignee
Unicast Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unicast Corp
Priority to JP2007318778A
Publication of JP2009139857A
Legal status: Pending

Abstract

Kind Code: A1

Content is selected and displayed according to the attributes of a large number of passersby as a whole, so as to increase the macroscopic effect of information transmission.
(1) A first camera to a third camera transmit images of an unspecified number of passersby to a content display control device. (2) The content display control device acquires the “personal attribute information” of each of passersby A to F. (3) The content display control device determines the macro attribute of the recognized passersby as a whole based on the “personal attribute information” of passersby A to F. (4) The content display control device selects content whose target matches the “macro attribute information” of passersby A to F, and controls its display on the display screen. (5) Content matching the “macro attribute information” of passersby A to F is displayed on the display screen. In this way, the unspecified number of passersby can be viewed as a whole, in a macroscopic manner, and the content display effect can be further enhanced.
[Selection] Figure 1

Description

  The present invention relates to a content display control device, a content display control method, and a content display control program that control the display of content suited to passersby on the display screen of a display device placed at a position where it can be viewed by an unspecified number of passersby, and that select and display content according to the attributes of the passersby as a whole so as to increase the macroscopic effect of information transmission.

  Conventionally, large advertising billboards have been installed in easily visible locations in busy areas, both indoors and outdoors, for the purpose of promoting corporate products. When passersby see the advertisements posted on such billboards, increased sales of the advertised products can be expected.

  However, an advertisement posted on a billboard remains the same until it is replaced. Unless the attributes of the passersby near the installation site are surveyed in advance and an advertisement matching those attributes is posted, the advertisement will not be effective.

  Therefore, as disclosed in Patent Document 1 and Patent Document 2, for example, information providing apparatuses have been proposed that display information for passersby on an electronic display whose content can be switched, capture passersby with a camera or the like, acquire each passerby's attributes from the captured image, select information according to those attributes, and display the selected information on the electronic display.

Japanese Patent Laid-Open No. 2003-271084 (Patent Document 1)
Japanese Patent Laid-Open No. 2006-235311 (Patent Document 2)

  However, with the conventional techniques represented by Patent Documents 1 and 2 described above, when there are few passersby, information matching each individual passerby's attributes can be selected and displayed on the electronic display. When there are many passersby, however, each passerby is photographed in turn by a camera or the like, the passerby's attributes are determined from the captured image, and information matching those attributes is selected and displayed; if the attributes of the many passersby vary, the displayed information may be switched frequently.

  In general, when information is conveyed to a person through a display, the information must remain displayed for at least a certain time for the person to visually recognize it. In particular, when the information to be conveyed is an advertisement, no advertising effect can be expected at all if the display switches frequently.

  Furthermore, when a large number of passersby are photographed simultaneously with a camera or the like, their acquired attributes vary, and the problem arises of whose attributes, among the passersby captured at the same time, should determine the information to be selected and displayed. For example, if information matching the attributes of only one particular passerby among many is selected and displayed on the electronic display, the information transmission effect for the passersby as a whole may not be sufficiently realized.

  The present invention has been made to solve the above problems, and has as its object to provide a content display control device, a content display control method, and a content display control program capable of selecting and displaying content corresponding to the attributes of the passersby as a whole, so as to increase the macroscopic effect of information transmission.

  To solve the above problems and achieve the object, the present invention is a content display control device that controls the display, on the display screen of a display device placed at a position where it can be viewed by an unspecified number of passersby, of content suited to passersby passing in front of the display screen. The device includes: personal attribute information acquisition means for acquiring individual attribute information of each of a plurality of passersby passing in front of the display screen; macro attribute information determination means for determining macro attribute information of the plurality of passersby as a whole from the individual attribute information acquired by the personal attribute information acquisition means; content storage means for storing a plurality of contents to be displayed on the display screen in association with macro attribute information; and content selection means for searching the content storage means using the macro attribute information determined by the macro attribute information determination means and selecting the content to be displayed on the display screen.

  Further, in the above invention, the present invention further includes personal attribute information storage means for storing, for each passerby, the individual attribute information acquired by the personal attribute information acquisition means, and the macro attribute information determination means determines the macro attribute information of all passersby from the personal attribute information of all passersby stored by the personal attribute information storage means over a predetermined time.
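  The time-windowed storage described above can be sketched as follows. This is a minimal illustration only: the window length, record shape, and class name are assumptions, not taken from the patent.

```python
import time
from collections import deque

class PersonalAttributeStore:
    """Keeps per-passerby attribute records for a sliding time window.

    Hypothetical sketch: the patent does not specify data structures.
    """

    def __init__(self, window_seconds=60.0, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock          # injectable for testing
        self.records = deque()      # (timestamp, attributes) pairs

    def add(self, attributes):
        self.records.append((self.clock(), attributes))

    def recent(self):
        """Return attributes observed within the last `window` seconds."""
        cutoff = self.clock() - self.window
        while self.records and self.records[0][0] < cutoff:
            self.records.popleft()  # drop records older than the window
        return [attrs for _, attrs in self.records]
```

  Macro attribute determination then operates on `recent()` rather than on all records ever stored, so the displayed content tracks the current crowd.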

  Further, in the above invention, the personal attribute information acquisition means detects the face of each of the plurality of passersby, and the personal attribute information includes, for each face, its attributes, its size, its direction relative to the display screen, and the time during which the direction relative to the display screen is detected at a predetermined angle or more.

  Further, in the above invention, the present invention further includes: display-screen-viewing passerby count calculation means for calculating, from the personal attribute information stored by the personal attribute information storage means over the predetermined time, the number of passersby whose face direction relative to the display screen was at or above a predetermined angle and who can therefore be regarded as having viewed the display screen; and content visibility ratio calculation means for calculating a content visibility ratio by dividing the count calculated by the display-screen-viewing passerby count calculation means by the total number of passersby whose personal attribute information was stored in the personal attribute information storage means over the predetermined time. The content selection means then selects the content to be displayed on the display screen further based on at least one of the display-screen-viewing passerby count, the total passerby count, and the content visibility ratio.
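  The counts and ratio defined above reduce to a simple computation. In this sketch the record format (a dict with a `face_angle` entry) and the threshold angle are illustrative assumptions:

```python
def content_visibility(records, angle_threshold_deg=30.0):
    """Compute (viewers, total, visibility ratio) from passerby records.

    Each record is a dict with a 'face_angle' entry in degrees; a passerby
    whose face angle is at or above the threshold is counted as having
    viewed the display screen. Record shape and threshold are assumptions.
    """
    total = len(records)
    viewers = sum(1 for r in records if r["face_angle"] >= angle_threshold_deg)
    ratio = viewers / total if total else 0.0
    return viewers, total, ratio
```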

  Further, in the above invention, the present invention further includes: display condition storage means for storing content display conditions, each being a predetermined condition for controlling the display of content on the display screen or a combination of such conditions; and condition satisfaction determination means for determining whether a content display condition stored by the display condition storage means is satisfied. When the condition satisfaction determination means determines that the content display condition is satisfied, the content selection means selects the content to be displayed on the display screen using the macro attribute information determined by the macro attribute information determination means.

  Further, in the above invention, the personal attribute information acquisition means includes: passerby image acquisition means for acquiring images of the plurality of passersby; and moving object image area specifying means for detecting moving objects included in the images acquired by the passerby image acquisition means and specifying the image areas of those moving objects. The individual attribute information of each of the plurality of passersby passing in front of the display screen is acquired only from the image areas specified by the moving object image area specifying means.
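  Restricting attribute extraction to moving-object regions can be sketched with simple frame differencing. The grayscale list-of-lists frames, threshold, and bounding-box output here are illustrative assumptions, not the patent's actual image processing:

```python
def moving_region(prev_frame, curr_frame, threshold=25):
    """Return the bounding box (top, left, bottom, right) of pixels that
    changed between two grayscale frames (lists of lists of 0-255 ints),
    or None if nothing moved. Per-pixel differencing sketch only."""
    rows, cols = [], []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            if abs(p - c) >= threshold:
                rows.append(y)
                cols.append(x)
    if not rows:
        return None  # no motion: skip face detection entirely
    return min(rows), min(cols), max(rows) + 1, max(cols) + 1
```

  Face detection then runs only inside the returned box, which is what makes attribute acquisition faster when most of the frame is static background.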

  Further, in the above invention, the personal attribute information acquisition means includes: background image acquisition means for acquiring, as a background image, an image acquired by the passerby image acquisition means in which no passerby passing in front of the display screen appears; and personal attribute information acquisition updating means for updating the personal attribute information acquisition means based on the background image acquired by the background image acquisition means.

  The present invention further includes content display control means for controlling the display, on the display screen, of the content selected by the content selection means. The content display control means restricts the display of content on the display screen when the passerby image acquisition means has not acquired an image of a passerby passing in front of the display screen for a predetermined time, and cancels the restriction when the passerby image acquisition means acquires an image of such a passerby.
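  This restriction logic amounts to a small idle timer. The timeout value and method names below are assumptions for illustration:

```python
class DisplayPowerGuard:
    """Restricts content display after a period with no passerby images,
    and lifts the restriction as soon as a passerby is seen again.

    Hypothetical sketch of the display-restriction behavior described
    in the patent; timestamps are caller-supplied seconds."""

    def __init__(self, idle_timeout=300.0):
        self.idle_timeout = idle_timeout
        self.last_seen = None

    def on_passerby_image(self, now):
        self.last_seen = now

    def display_allowed(self, now):
        if self.last_seen is None:
            return False               # nothing seen yet: stay restricted
        return (now - self.last_seen) < self.idle_timeout
```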

  Further, in the above invention, the present invention further includes: ideal characteristic value storage means for storing in advance a range of ideal characteristic values for the images acquired by the passerby image acquisition means; and passerby image acquisition control means for controlling the passerby image acquisition means so that the characteristic values of the acquired images fall within the range of ideal characteristic values stored by the ideal characteristic value storage means.

  The present invention is also a content display control device that controls the display, on the display screen of a display device placed at a position where it can be viewed by an unspecified number of passersby, of content suited to passersby passing in front of the display screen, and includes: passerby image acquisition means for acquiring images of the passersby; non-moving object image part extraction means for extracting the image parts of non-moving objects not included in the difference information between consecutive frame images acquired by the passerby image acquisition means; image difference acquisition means for acquiring a difference image between the non-moving object image parts extracted by the non-moving object image part extraction means and a background image, which is an image acquired by the passerby image acquisition means containing no passersby; non-moving object detection means for detecting non-moving objects included in the difference image acquired by the image difference acquisition means; and notification means for notifying, when a non-moving object has been detected for a predetermined time, that the non-moving object is present in the field of view of the passerby image acquisition means.
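  The non-moving (abandoned) object detection above combines two cues: the region is static across consecutive frames, yet differs from the stored background. A set-based sketch, with grayscale list-of-lists frames and all names as illustrative assumptions:

```python
def abandoned_pixels(prev_frame, curr_frame, background, threshold=25):
    """Return pixel coordinates that are static between consecutive frames
    but differ from the background image: candidate non-moving objects.

    Frames are grayscale lists of lists of 0-255 ints; this is an
    illustrative sketch, not the patent's actual image processing."""
    result = set()
    for y in range(len(curr_frame)):
        for x in range(len(curr_frame[y])):
            static = abs(curr_frame[y][x] - prev_frame[y][x]) < threshold
            differs = abs(curr_frame[y][x] - background[y][x]) >= threshold
            if static and differs:
                result.add((y, x))
    return result
```

  A notification would then be raised when the same candidate region persists across frames for the predetermined time.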

The present invention is also a content display control method in which a content display control device performs processing to control the display, on the display screen of a display device placed at a position where it can be viewed by an unspecified number of passersby, of content suited to passersby passing in front of the display screen. The method includes: a personal attribute information acquisition step of acquiring individual attribute information of each of a plurality of passersby passing in front of the display screen; a macro attribute information determination step of determining macro attribute information of the plurality of passersby as a whole from the individual attribute information acquired in the personal attribute information acquisition step; and a content selection step of searching, using the macro attribute information determined in the macro attribute information determination step, a content database in which a plurality of contents to be displayed on the display screen are stored in association with macro attribute information, and selecting the content to be displayed on the display screen.

  The present invention is also a content display control program for causing a computer device to execute processing to control the display, on the display screen of a display device placed at a position where it can be viewed by an unspecified number of passersby, of content suited to passersby passing in front of the display screen. The program causes the computer device to execute: a personal attribute information acquisition procedure of acquiring individual attribute information of each of a plurality of passersby passing in front of the display screen; a macro attribute information determination procedure of determining macro attribute information of the plurality of passersby as a whole from the individual attribute information acquired by the personal attribute information acquisition procedure; and a content selection procedure of searching, using the macro attribute information determined by the macro attribute information determination procedure, a content database in which a plurality of contents to be displayed on the display screen are stored in association with macro attribute information, and selecting the content to be displayed on the display screen.

  According to the present invention, content corresponding to the macro attribute information of a plurality of passersby is selected and displayed on the display screen, so the optimum content for the plurality of passersby viewed as a whole can be displayed, with the effect that the content display effect on the plurality of passersby is improved.

  Further, according to the present invention, the personal attribute information of a plurality of passersby is stored for each passerby, and the macro attribute information of all passersby is determined from the personal attribute information stored over a predetermined time. The macro attribute information of future passersby can therefore be estimated from the macro attribute information of past passersby, with the effect that the optimum content can be displayed for future passersby and the content display effect is improved.

  Further, according to the present invention, the face attributes of the plurality of passersby, the size of each face, the direction of each face relative to the display screen, and the time during which each face's direction relative to the display screen is detected at a predetermined angle or more are acquired as personal attribute information. Even when the passersby's walking directions and their positional relationships to the display screen vary widely, information about the faces of the plurality of passersby can therefore be acquired.

  Further, according to the present invention, from the personal attribute information stored over the predetermined time by the personal attribute information storage means, the number of passersby whose face angle was at or above the predetermined angle, the total number of passersby whose personal attribute information was stored over the predetermined time, and the content visibility ratio are calculated, with the effect that content can be selected based on these numerical values and indicators relating to the plurality of passersby.

  Further, according to the present invention, the content selection means selects content when the predetermined condition or combination of predetermined conditions is determined to be satisfied, with the effect that content can be selected efficiently and effective content can be displayed.

  Further, according to the present invention, the personal attribute information acquisition means acquires the individual attribute information of each of the plurality of passersby passing in front of the display screen only from the image areas specified by the moving object image area specifying means, with the effect that the acquisition of the passersby's personal attribute information can be made more efficient and faster.

  Further, according to the present invention, the personal attribute information acquisition means is updated based on the background image acquired by the background image acquisition means, so the personal attribute information acquisition means can take the background image into account, with the effect that the personal attribute information of the plurality of passersby can be acquired with higher accuracy.

  In addition, according to the present invention, when no passerby image is acquired for a predetermined time, the display of content on the display screen is restricted, with the effect that the power consumed in displaying content on the content display device can be reduced and the life of the content display device extended.

  Further, according to the present invention, the passerby image acquisition means is controlled so that the characteristic values of the images it acquires fall within the range of ideal characteristic values stored in the ideal characteristic value storage means, with the effect that the passerby image acquisition means can be controlled automatically to acquire images with optimum characteristic values.

  Further, according to the present invention, the image parts of non-moving objects not included in the difference information between consecutive frames acquired by the passerby image acquisition means are extracted, and when a non-moving object included in the difference image from the background image is detected for a predetermined time, a notification is issued that the non-moving object is present in the field of view of the passerby image acquisition means. With this, while displaying content suited to passersby passing in front of the display screen of a display device placed where it can be viewed by an unspecified number of passersby, dangerous abandoned objects can also be discovered at an early stage.

  Exemplary embodiments of the content display control device, content display control method, and content display control program according to the present invention will be described below in detail with reference to the accompanying drawings. The content display control devices, methods, and programs according to the following first to sixth embodiments select and display advertisements for an unspecified number of passersby on an electronic display device whose display content can be switched, installed at a position where it can be viewed by the passersby on indoor and outdoor roads and passages through which many unspecified passersby pass, particularly in station premises.

  Before describing the embodiments, their outline and features will be explained. FIG. 1 is a diagram illustrating the outline and features of the present embodiment. As shown in the figure, three cameras, a first camera to a third camera, are installed in the vicinity of the display screen so as to capture, from different directions, an unspecified number of passersby passing through a station passage, for example. The display screen belongs to an electronic display device whose display content can be switched, and an advertisement is displayed as the content. The following embodiments use as the imaging device a digital camera that has a CMOS image sensor and captures moving images; however, the present invention is not limited to a digital camera, and any sensor capable of acquiring an image, such as an infrared sensor, a sound wave sensor, or a millimeter wave sensor, may be employed.

  (1) First, the first camera to the third camera transmit images of an unspecified number of passersby to the content display control device. Based on the images, the content display control device recognizes six passersby, A to F, as the unspecified number of passersby.

  (2) Subsequently, the content display control device acquires the attributes of passersby A to F. Here, attributes refer to individual characteristics such as sex and age group. The attributes of passersby A to F acquired here are referred to as “personal attribute information”.

  According to FIG. 1, the acquired “personal attribute information” is “female, 20s” for passerby A, “female, 20s” for passerby B, “female, 20s” for passerby C, “female, 20s” for passerby D, “male, 20s” for passerby E, and “female, 20s” for passerby F.

  (3) Subsequently, the content display control device determines the macro attribute of the recognized passersby as a whole based on the “personal attribute information” of passersby A to F. Here, the macro attribute is, for example, the most common “personal attribute information” among passersby A to F. The macro attribute is hereinafter referred to as “macro attribute information”.

  According to FIG. 1, of the six recognized passersby, five have “personal attribute information” of “female, 20s”, and one has “personal attribute information” of “male, 20s”. Accordingly, the “macro attribute information” of passersby A to F is determined to be “female, 20s”.

  (4) Subsequently, the content display control device selects content whose target matches the “macro attribute information” of passersby A to F, and controls its display on the display screen. It is assumed that the content display control device stores content data in advance in association with the “macro attribute information” targeted by that content data.

  (5) Content matching the “macro attribute information” of passersby A to F is then displayed on the display screen. That is, content targeting “female, 20s” is displayed on the display screen.
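  Steps (1) through (5) reduce to a majority vote over the acquired attributes followed by a lookup in the stored content associations. A minimal sketch, in which the content table entries and attribute tuples are illustrative assumptions:

```python
from collections import Counter

# Hypothetical content table: target macro attribute -> content ID.
CONTENT_TABLE = {
    ("female", "20s"): "cosmetics_ad",
    ("male", "20s"): "mens_fashion_ad",
}

def select_content(personal_attributes):
    """Determine macro attribute information by majority vote over the
    personal attribute tuples, then return the content registered for
    that target, if any."""
    if not personal_attributes:
        return None
    macro_attribute, _count = Counter(personal_attributes).most_common(1)[0]
    return CONTENT_TABLE.get(macro_attribute)
```

  For the passersby of FIG. 1 (five “female, 20s”, one “male, 20s”), the majority vote yields (“female”, “20s”), so the content registered for that target is selected.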

  In the above example, only passerby E among passersby A to F has “personal attribute information” of “male, 20s”; the other five all have “personal attribute information” of “female, 20s”. For passersby A to F as a whole, displaying content targeting “female, 20s” therefore has an overwhelmingly higher content display effect than displaying content targeting “male, 20s”.

  In particular, if the content is an advertisement placed by a company to promote product sales, displaying an advertisement targeting the five “female, 20s” passersby is overwhelmingly more effective than displaying one targeting the single “male, 20s” passerby. In other words, when a macroscopic effect is pursued, the single “male, 20s” passerby is not targeted, and content is instead displayed for the larger number of “female, 20s” passersby.

  As described above, the present embodiment selects content corresponding to the “macro attribute information” of a plurality of passersby and controls its display on the display screen, and was devised for the purpose of displaying the optimum content for the plurality of passersby viewed macroscopically and thereby enhancing the content display effect on those passersby.

  Embodiment 1 will be described below with reference to FIGS. The first embodiment shows a content display control device that determines the “macro attribute information” of an unspecified number of passersby as a whole based on the “personal attribute information” of each passerby captured by the cameras, and controls the display of content matching the “macro attribute information”.

  First, the configuration of the content display control device according to the first embodiment will be described. FIG. 2 is a functional block diagram of the configuration of the content display control device according to the first embodiment. As shown in the figure, the content display control device 100a according to the first embodiment is connected to a first camera 111a, a second camera 111b, a third camera 111c, and a content display device 112 having a content display screen 112a.

  The three cameras, the first camera 111a to the third camera 111c, are installed in the vicinity of the content display screen 112a so that an unspecified number of passersby passing in front of the content display screen 112a can be captured from different directions. The number of cameras is not limited to three; the more directions from which the passersby passing in front of the content display screen 112a can be imaged by a plurality of cameras, the higher the acquisition accuracy of the passersby's “personal attribute information”.

  The content display control device 100a includes a control unit 101, a storage unit 102, and an interface unit 103, which is an interface for receiving image data from the first camera 111a to the third camera 111c and transmitting content data to the content display device 112.

  The control unit 101 is a control device that controls the entire content display control device 100a. As a configuration related to the first embodiment, it has a personal attribute information acquisition processing unit 101a, a macro attribute information determination processing unit 101b, a content selection processing unit 101c, and a content display control processing unit 101d.

  The personal attribute information acquisition processing unit 101a is a processing unit that receives images of an unspecified number of passersby captured by the first camera 111a to the third camera 111c and transmitted at a fixed period of several tens to several hundreds of milliseconds, recognizes each image individually using the passerby recognition module stored in the passerby recognition module storage unit 102a of the storage unit 102 to detect each passerby's face, and acquires “personal attribute information” based on the attributes of each detected face.

  The “personal attribute information” acquired here indicates personal characteristics such as sex, age group (or age), race, and clothing color. The “personal attribute information” is estimated information produced by the personal attribute information acquisition processing unit 101a.

  Furthermore, the personal attribute information acquisition processing unit 101a is a processing unit that calculates the size of each passerby's face (hereinafter, “face size”), the angle between each passerby's face and the front direction of the content display screen 112a (hereinafter, “face angle”), and the time during which that angle is at or above a predetermined value (hereinafter, “gaze time”).

  Note that the time during which the angle formed by each passerby's face and the content display screen 112a is at or above the predetermined value can be regarded as the time during which that passerby was viewing or gazing at the content displayed on the content display screen 112a. The “face size”, “face angle”, and “gaze time” may all be included in the “personal attribute information”.
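  Accumulating the “gaze time” from per-frame face angle measurements can be sketched as follows; the frame period and the threshold value are illustrative assumptions:

```python
def gaze_time(face_angles, frame_period=0.1, angle_threshold=30.0):
    """Total time (seconds) during which the face angle was at or above
    the threshold, given one angle measurement per frame.

    face_angles: per-frame angles in degrees between the face direction
    and the front direction of the screen; illustrative sketch only."""
    return sum(frame_period for a in face_angles if a >= angle_threshold)
```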

  The personal attribute information acquisition processing unit 101a then stores the acquired “personal attribute information” of each passerby, for each passerby, in a passerby personal attribute information accumulation DB (Data Base, database; likewise hereinafter) 102b of the storage unit 102, described later.

  The macro attribute information determination processing unit 101b determines the “macro attribute information” of the unspecified number of passersby as a whole based on the “personal attribute information” of each passerby acquired by the personal attribute information acquisition processing unit 101a. A typical method is to take the most common combination of “gender” and “age group” among the acquired “personal attribute information” as the “macro attribute information”.

  This is because the tendency of content preference generally differs depending on the “personal attribute information” or “macro attribute information” determined by the combination of “gender” and “age group”. For example, for a passerby whose “personal attribute information” is “male, 20s”, displaying an advertisement for men's youth fashion easily attracts that passerby's attention and draws out a greater advertising effect. Conversely, even if an advertisement for cosmetics is displayed to a passerby whose “personal attribute information” is “male, 20s”, it attracts hardly any attention and no advertising effect can be drawn out.

  On the other hand, for a passerby whose “personal attribute information” is “female, 20s”, displaying, for example, an advertisement for cosmetics easily attracts that passerby's attention and draws out a greater advertising effect, whereas displaying an advertisement for men's youth fashion attracts hardly any attention and yields no advertising effect.

  Therefore, when the “personal attribute information” of an unspecified number of passersby has been acquired, the most common combination of “gender” and “age group” is determined as the “macro attribute information”, and content whose target matches this “macro attribute information” is displayed.

  For example, suppose that “male, 20s” is acquired for one passerby and “female, 20s” for five passersby as the “personal attribute information” of an unspecified number of passersby. In this case, focusing on the “female, 20s” passersby and displaying, for example, an advertisement for cosmetics easily attracts the attention of more passersby and yields a greater advertising effect.

  Conversely, if an advertisement for men's youth fashion targeting the “male, 20s” passerby is displayed, it attracts the attention of the one “male, 20s” passerby among the six, but rarely attracts the attention of the other five “female, 20s” passersby; viewed as a whole, little advertising effect is obtained.

  In this way, by determining the “macro attribute information” of an unspecified number of passersby and displaying content whose target matches this “macro attribute information”, the unspecified number of passersby can be viewed macroscopically as a whole, and a greater advertising effect can be drawn out.

  The above method for determining the “macro attribute information” is merely an example. Depending on the situation, for example, the most common value of a single item such as “gender” or “age group” may be determined as the “macro attribute information”. That is, the most common “personal attribute information” with respect to at least one item of the “personal attribute information”, or a combination of items, may be determined as the “macro attribute information”.

  In addition, the “macro attribute information” may be determined by quantifying and averaging the values of each item of the “personal attribute information”, or by collecting the mode of each item. The method of determining the “macro attribute information” is thus not limited to one; various methods can be used to enhance the content display effect by viewing the unspecified number of passersby macroscopically as a whole.
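  As an illustration of the typical determination method described earlier, which takes the most common combination of “gender” and “age group”, the minimal sketch below picks the mode of an attribute-item combination. The field names are assumptions for illustration; the specification does not prescribe a data format.

```python
from collections import Counter

def macro_attribute(personal_infos, keys=("gender", "age_group")):
    """Pick the most common combination of the given items as the
    "macro attribute information". By default this uses the
    gender/age-group pair the text gives as its typical example;
    passing a single key illustrates the single-item variant."""
    combos = Counter(tuple(p[k] for k in keys) for p in personal_infos)
    most_common, _count = combos.most_common(1)[0]
    return dict(zip(keys, most_common))

# The text's example: one "male, 20s" and five "female, 20s" passersby.
passersby = [{"gender": "male", "age_group": "20s"}] + \
            [{"gender": "female", "age_group": "20s"}] * 5
macro = macro_attribute(passersby)  # {'gender': 'female', 'age_group': '20s'}
```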

  The content selection processing unit 101c determines whether a “content display condition” stored in the content display condition storage unit 102c of the storage unit 102 described later is satisfied and, only when it is, performs a process of selecting content that matches the “macro attribute information” determined by the macro attribute information determination processing unit 101b from the content data stored in the content DB 102d of the storage unit 102 described later.

  The content display control processing unit 101d transmits the content data of the content selected by the content selection processing unit 101c to the content display device 112 via the communication interface unit 103 so that it is displayed on the content display screen 112a.

  The storage unit 102 is a non-volatile storage device having a passerby recognition module storage unit 102a that stores the passerby recognition module, a passerby personal attribute information accumulation DB 102b that stores the passerby personal attribute information accumulation table, a content display condition storage unit 102c that stores the “content display conditions”, and a content DB 102d that stores content data. The passerby personal attribute information accumulation DB 102b and the content DB 102d are databases in which data is registered, extracted, and updated by a DBMS (Database Management System), not shown.

  The passerby recognition module stored in the passerby recognition module storage unit 102a is a program that includes an algorithm that receives image data, recognizes each person and each person's face included in the image data, and individually outputs the attributes of each person and each face.

  The passerby personal attribute information accumulation table stored in the passerby personal attribute information accumulation DB 102b is a data table for accumulating the “personal attribute information” of each passerby acquired by the personal attribute information acquisition processing unit 101a.

  The passerby personal attribute information accumulation DB 102b may instead be placed in a data center remote from the content display control apparatus 100a, receiving and accumulating the “personal attribute information” from the content display control apparatus 100a via a secure network. In this way, strain on the resources of the storage unit 102 of the content display control apparatus 100a can be avoided.

  In addition, since the data center can receive and accumulate “personal attribute information” from a plurality of content display control apparatuses 100a installed in various places via a secure network, integrating the “personal attribute information” acquired by many content display control apparatuses 100a and combining it with separately obtainable weather information and information on events in the surrounding area yields reference information for posting new content and for pricing content display.

  The “content display condition” stored in the content display condition storage unit 102c is a condition for determining whether to perform the process of selecting content to display on the content display screen 112a of the content display device 112. Unless a “content display condition” is satisfied, the content selection processing unit 101c does not perform the content selection process, which reduces wasteful content display and display switching.

  Further, the content data stored in the content DB 102d is stored in association with the “macro attribute information” targeted by each content. Because the content selection processing unit 101c selects the content data associated with the “macro attribute information” that matches the “macro attribute information” determined by the macro attribute information determination processing unit 101b, more appropriate content can be displayed by viewing the unspecified number of passersby macroscopically as a whole.

  Note that the content stored in the content DB 102d may include, in addition to advertisements, for example station guidance information, train delay information, weather information, or disaster information when the content display device is installed in a station concourse. These pieces of information are displayed on the content display screen 112a in an initial state in which no advertisement is displayed on the content display screen 112a of the content display device 112.

  Next, the passerby personal attribute information accumulation table stored in the passerby personal attribute information accumulation DB 102b shown in FIG. 2 will be described. FIG. 3 is a diagram illustrating an example of the passerby personal attribute information accumulation table. As shown in the figure, the table has at least the columns “passerby ID”, “person recognition start date and time”, “face detection start date and time”, “face angle detection time”, “gender”, “age group”, “face attribute”, “face angle”, and “gaze time”.

  The “passerby ID” is identification information for uniquely identifying a passerby whose “personal attribute information” has been acquired by the personal attribute information acquisition processing unit 101a. The “person recognition start date and time” is the date and time when the passerby identified by the “passerby ID” was first recognized by the personal attribute information acquisition processing unit 101a.

  The “face detection start date and time” is the date and time when the personal attribute information acquisition processing unit 101a started to detect the face of the passerby identified by the “passerby ID”. The “face angle detection time” is the date and time when the personal attribute information acquisition processing unit 101a started to detect the face angle of that passerby.

  Further, the “gender”, “age group”, “face attribute”, “face angle”, and “gaze time” are the “personal attribute information”, acquired by the personal attribute information acquisition processing unit 101a, of the passerby identified by the “passerby ID”.
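  One row of the accumulation table can be sketched as a simple record. The column names follow the text, while the types and sample values are illustrative assumptions, not part of the specification.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PasserbyRecord:
    """One row of the passerby personal attribute information
    accumulation table. Column names follow the text; types and the
    sample values below are illustrative assumptions."""
    passerby_id: str
    person_recognition_start: datetime
    face_detection_start: datetime
    face_angle_detection_time: datetime
    gender: str
    age_group: str
    face_attribute: str
    face_angle: float
    gaze_time_ms: int

row = PasserbyRecord("P-0001",
                     datetime(2007, 12, 10, 9, 0, 0),
                     datetime(2007, 12, 10, 9, 0, 1),
                     datetime(2007, 12, 10, 9, 0, 1),
                     "female", "20s", "glasses", 72.0, 700)
```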

  By thus accumulating the “personal attribute information” of each passerby together with the “person recognition start date and time”, “face detection start date and time”, and “face angle detection time”, it becomes possible to predict the “personal attribute information” of future unspecified numbers of passersby based on the “personal attribute information” of past unspecified numbers of passersby.

  For example, it becomes possible to predict how many passersby having particular “personal attribute information” pass in front of the content display screen 112a during a certain time period. Furthermore, by linking the “person recognition start date and time” and “face detection start date and time” with calendar information, it is also possible to predict how many passersby having particular “personal attribute information” pass in front of the content display screen 112a in a certain time period on a certain day of the week.
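  A naive version of such a prediction can be sketched by bucketing past recognition timestamps by day of week and hour, so that the bucket counts serve as frequency estimates for each time slot. This is an illustrative approximation, not the patent's prescribed method.

```python
from collections import Counter
from datetime import datetime

def hourly_profile(recognition_starts):
    """Count past recognitions per (weekday, hour) bucket. The bucket
    counts act as a simple frequency estimate of how many passersby
    pass the screen in that time slot (illustrative only)."""
    return Counter((t.weekday(), t.hour) for t in recognition_starts)

starts = [datetime(2007, 12, 3, 8, 15),   # Monday, 08h
          datetime(2007, 12, 3, 8, 40),   # Monday, 08h
          datetime(2007, 12, 4, 8, 10)]   # Tuesday, 08h
profile = hourly_profile(starts)  # e.g. two Monday-morning sightings
```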

  In this way, by accumulating the past “personal attribute information” of unspecified numbers of passersby in the passerby personal attribute information accumulation DB 102b, the “macro attribute information” of future unspecified numbers of passersby as a whole can be predicted. It then becomes possible to decide what kind of “macro attribute information” should be associated with the content prepared in the content DB 102d so that it can be selected by the content selection processing unit 101c.

  Next, the “content display conditions” stored in the content display condition storage unit 102c illustrated in FIG. 2 will be described. FIG. 4 is a diagram illustrating an example of a content display condition table. As shown in the figure, in the content display condition table, the “content display conditions” are listed in order of the priority used when determining whether they are satisfied.

  One or more “content display conditions” selected from those listed in the content display condition table as the determination conditions for deciding whether to perform the content selection process are called a “scenario condition” for displaying content on the content display screen 112a.

  Content is displayed on the content display screen 112a only when all the “content display conditions” included in the “scenario condition” are satisfied. The priority of the “content display conditions” also applies within the “scenario condition”: satisfaction is determined starting from the “content display condition” of highest priority, and when one “content display condition” is determined not to be satisfied, determination of the remaining “content display conditions” is stopped. In this way, the process of determining whether the “content display conditions” included in the “scenario condition” are satisfied can be sped up.
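  The prioritized, short-circuiting evaluation described above can be sketched as follows. The condition contents here are stand-in predicates; only the control flow (evaluate in priority order, stop at the first failure) follows the text.

```python
def scenario_satisfied(conditions):
    """Evaluate the "content display conditions" of a "scenario
    condition" in priority order. All must hold; as soon as one fails,
    the remaining lower-priority checks are skipped, as the text
    describes. Returns (satisfied, number_of_checks_run)."""
    checks = 0
    for _priority, predicate in sorted(conditions, key=lambda c: c[0]):
        checks += 1
        if not predicate():
            return False, checks
    return True, checks

conds = [(1, lambda: True),    # e.g. total gaze time long enough
         (2, lambda: False),   # e.g. no matching macro-attribute content
         (6, lambda: True)]    # e.g. a face close to the screen
ok, ran = scenario_satisfied(conds)  # fails at the 2nd check; 3rd skipped
```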

  For example, the “gaze time” that is the “content display condition” of the first priority is a “content display condition” under which content is displayed on the content display screen 112a when the sum of the “gaze times” of the unspecified number of passersby whose “personal attribute information” has been acquired by the personal attribute information acquisition processing unit 101a is equal to or longer than a predetermined time.

  The “passerby macro attribute information” that is the “content display condition” of the second priority is a “content display condition” under which content is displayed on the content display screen 112a when content matching the “macro attribute information” determined by the macro attribute information determination processing unit 101b is stored in the content DB 102d.

  The “display screen gaze count for each personal attribute information” that is the “content display condition” of the third priority is a “content display condition” under which content is displayed on the content display screen 112a when the total “gaze time”, per “personal attribute information”, of the unspecified number of passersby whose “personal attribute information” has been acquired by the personal attribute information acquisition processing unit 101a is equal to or greater than a predetermined value.

  The “display screen gaze count per fixed time” that is the “content display condition” of the fourth priority is a “content display condition” under which content is displayed on the content display screen 112a when, among the unspecified number of passersby whose “personal attribute information” has been acquired by the personal attribute information acquisition processing unit 101a, the total number of passersby who can be regarded as watching the content display screen 112a because their “face angle” is equal to or greater than a predetermined angle during a fixed time is equal to or greater than a predetermined value.

  The “face angle” that is the “content display condition” of the fifth priority is a “content display condition” under which content is displayed on the content display screen 112a when, among the unspecified number of passersby whose “personal attribute information” has been acquired by the personal attribute information acquisition processing unit 101a, there is a passerby who can be regarded as watching the content display screen 112a because his or her “face angle” is equal to or greater than a predetermined angle.

  The “face size (distance between the face and the display screen)” that is the “content display condition” of the sixth priority is a “content display condition” under which content is displayed on the content display screen 112a when, among the unspecified number of passersby whose “personal attribute information” has been acquired by the personal attribute information acquisition processing unit 101a, there is a passerby whose “face size” is equal to or larger than a predetermined size and who can therefore be regarded as being close to the content display screen 112a.

  The “number of passersby” that is the “content display condition” of the seventh priority is a “content display condition” under which content is displayed on the content display screen 112a when the number of unspecified passersby whose “personal attribute information” has been acquired by the personal attribute information acquisition processing unit 101a is equal to or greater than a predetermined number.

  The “viewing rate” that is the “content display condition” of the eighth priority is a “content display condition” under which content is displayed on the content display screen 112a when the “viewing rate”, obtained by dividing the number of passersby whose “face angle” is equal to or greater than a predetermined angle by the total number of passersby whose “personal attribute information” has been acquired by the personal attribute information acquisition processing unit 101a, is equal to or greater than a predetermined ratio.
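  The “viewing rate” computation can be sketched directly from its definition: watchers divided by all passersby. The threshold angle is an assumed value, not one from the specification.

```python
def viewing_rate(face_angles, threshold_deg=60.0):
    """"Viewing rate": passersby whose face angle meets the threshold
    (regarded as watching per the text), divided by all passersby.
    The 60-degree threshold is illustrative."""
    if not face_angles:
        return 0.0  # no passersby recognized
    watching = sum(1 for a in face_angles if a >= threshold_deg)
    return watching / len(face_angles)

rate = viewing_rate([80, 30, 70, 10])  # 2 of 4 watching
```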

  The “content provider” that is the “content display condition” of the ninth priority is a “content display condition” under which, when content whose “content provider” is a specific person, corporation, or group is stored in the content DB 102d, the content of that “content provider” is displayed on the content display screen 112a. A “content provider” is an advertiser when the content is an advertisement.

  Further, the “medium sales price or unit price” that is the “content display condition” of the tenth priority is a “content display condition” under which, when content whose “medium sales price or unit price” is equal to or higher than a predetermined price is stored in the content DB 102d, that content is displayed on the content display screen 112a.

  The “weather information” that is the “content display condition” of the eleventh priority is a “content display condition” under which content is displayed on the content display screen 112a when the current weather matches a specific condition. Under the “weather information” condition it is effective, for example, to display an advertisement for cold drinks when the current outside temperature is 30°C or higher, or to display an advertisement for rain gear when it is currently raining.

  The “calendar information” that is the “content display condition” of the twelfth priority is a “content display condition” under which content is displayed on the content display screen 112a when the current date matches a specific date. “Calendar information” includes seasons, days of the week, holidays, and seasonal event dates such as Christmas.

  Under the “calendar information” condition it is effective, for example, to display an advertisement aimed at business people when the current date is a weekday, or one aimed at families when it is a holiday. It is likewise effective to display an advertisement for summer clothing when the current date falls in summer, or for winter clothing when it falls in winter.
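  A toy selection routine mirroring these weather and calendar examples might look like the following. The temperature threshold comes from the text; the category names and the decision order are illustrative assumptions.

```python
def contextual_ads(temperature_c, is_raining, is_holiday):
    """Toy selection mirroring the weather/calendar examples in the
    text: cold drinks at 30 C or above, rain gear when raining, and
    family- vs business-oriented ads by holiday/weekday. Category
    names are illustrative."""
    ads = []
    if temperature_c >= 30:
        ads.append("cold drinks")
    if is_raining:
        ads.append("rain gear")
    ads.append("family" if is_holiday else "business")
    return ads

chosen = contextual_ads(32, True, False)
# -> ['cold drinks', 'rain gear', 'business']
```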

  For example, suppose that the “scenario condition” includes the “gaze time” that is the “content display condition” of the first priority, the “passerby macro attribute information” that is the “content display condition” of the second priority, and the “face size (distance between the face and the display screen)” that is the “content display condition” of the sixth priority.

  In this case, content matching the “macro attribute information” is displayed on the content display screen 112a when the sum of the “gaze times” of the unspecified number of passersby whose “personal attribute information” has been acquired by the personal attribute information acquisition processing unit 101a is equal to or longer than a predetermined time, content matching the “macro attribute information” determined by the macro attribute information determination processing unit 101b is stored in the content DB 102d, and among those passersby there is one whose “face size” is equal to or larger than a predetermined size and who can therefore be regarded as being close to the content display screen 112a.

  The “content display conditions” are not limited to those described above and may be various other conditions. For example, a “time during which the face size (distance between the face and the display screen) is equal to or larger than a predetermined size” may be added as a “content display condition”, under which content is displayed on the content display screen 112a when that time is equal to or longer than a predetermined time.

  When this “time during which the face size is equal to or larger than a predetermined size” is selected as part of the “scenario condition” and the condition is satisfied, at least one passerby can be regarded as having stayed within a certain distance of the content display screen 112a for the predetermined time or longer, so additional content can be displayed alongside the currently displayed content.

  It is also possible to divide the content display screen 112a and display a plurality of contents simultaneously according to the “scenario conditions”. Furthermore, it is possible to control the display timing of the content on the content display screen 112a according to the “scenario condition”.

  For example, when different pieces of “personal attribute information” are acquired at the same time, the content display screen 112a may be divided so that a plurality of contents corresponding to the respective “personal attribute information” are displayed simultaneously. Alternatively, the content corresponding to a passerby's “personal attribute information” may be displayed on the content display screen 112a only after that passerby's “gaze time” becomes equal to or longer than a predetermined time.

  Next, the content storage table stored in the content DB 102d shown in FIG. 2 will be described. FIG. 5 is a diagram illustrating an example of the content storage table. As shown in the figure, the content storage table has the columns “content ID”, which uniquely identifies the content, “macro attribute information”, which is the target of the content, and “content data”, which holds the moving-image or still-image data of the content.
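  A lookup in this table keyed on the “macro attribute information” can be sketched as a linear scan standing in for the DBMS query. The rows and file names are invented examples, not data from the specification.

```python
# Illustrative rows of the content storage table (FIG. 5): content ID,
# targeted macro attribute information, and content data.
CONTENT_TABLE = [
    {"content_id": "C1", "macro_attribute": ("male", "20s"),
     "content_data": "mens_fashion.mp4"},
    {"content_id": "C2", "macro_attribute": ("female", "20s"),
     "content_data": "cosmetics.mp4"},
]

def select_content(macro_attribute):
    """Search the content table with the "macro attribute information"
    as the key; a linear scan stands in for the DBMS query. Returning
    None corresponds to the "No" branch of step S108 (no hit)."""
    for row in CONTENT_TABLE:
        if row["macro_attribute"] == macro_attribute:
            return row
    return None

hit = select_content(("female", "20s"))
```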

  Next, content display control processing performed by the content display control apparatus 100a shown in FIG. 2 will be described. FIG. 6 is a flowchart showing a content display control processing procedure. The content display control process is activated and executed at a predetermined cycle.

  As shown in the figure, first, the personal attribute information acquisition processing unit 101a determines, using the passerby recognition module, whether a passerby has been recognized in the images acquired by the first camera 111a to the third camera 111c (step S101). When it is determined that a passerby has been recognized (Yes at step S101), the process proceeds to step S102; otherwise (No at step S101), step S101 is repeated.

  In step S102, the personal attribute information acquisition processing unit 101a acquires personal attribute information for each passerby using the passerby recognition module. Subsequently, the personal attribute information acquisition processing unit 101a registers the acquired personal attribute information for each passer-by in the passer-by personal attribute information storage DB 102b (step S103).

  Subsequently, the personal attribute information acquisition processing unit 101a determines whether personal attribute information of a plurality of passersby has been acquired (step S104). When it is determined that personal attribute information of a plurality of passersby has been acquired (Yes at step S104), the process proceeds to step S105; otherwise (No at step S104), the process proceeds to step S110.

  In step S105, the macro attribute information determination processing unit 101b determines the “macro attribute information” of the plurality of passersby based on their personal attribute information. Subsequently, the content selection processing unit 101c determines whether the “scenario condition” is satisfied (step S106). When it is determined that the “scenario condition” is satisfied (Yes at step S106), the process proceeds to step S107; otherwise (No at step S106), the process returns to step S101.

  In step S107, the content selection processing unit 101c searches the content DB 102d using the determined “macro attribute information” as a key. Subsequently, the content selection processing unit 101c determines whether the search in step S107 produced a hit (step S108). If it did (Yes at step S108), the process proceeds to step S109; if not (No at step S108), the process returns to step S101.

  In step S109, the content display control processing unit 101d transmits the content data of the content selected by the content selection processing unit 101c to the content display device 112 via the communication interface unit 103. The content display device 112 that has received the content data displays the content on the content display screen 112a. When this process ends, the content display control process ends.

  On the other hand, in step S110, the content selection processing unit 101c determines whether the “scenario condition” is satisfied. When it is determined that the “scenario condition” is satisfied (Yes at step S110), the process proceeds to step S111; otherwise (No at step S110), the process returns to step S101.

  In step S111, the content selection processing unit 101c searches the content DB 102d using the “personal attribute information” acquired in step S102 as a key. When this process ends, the process moves to step S108.

  By performing the above processing, the “macro attribute information” is determined from the “personal attribute information” of an unspecified number of passersby and content matching the determined “macro attribute information” is displayed, so that the content display effect obtained when the unspecified number of passersby is viewed macroscopically as a whole is further improved.
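  The flow of FIG. 6 can be condensed into one cycle as a sketch. The helper callables stand in for the content selection processing unit and the content DB, and a simple mode over attribute tuples stands in for the macro attribute determination; none of these are the patent's concrete implementation.

```python
from collections import Counter

def display_control_cycle(personal_infos, scenario_ok, find_content):
    """One cycle of the flow in FIG. 6, condensed: recognize passersby
    (S101), determine macro attributes when several are present
    (S104-S105) or fall back to one passerby's personal attributes
    (S110-S111), check the scenario condition (S106/S110), and search
    for content (S107/S111). Returns the content to display, or None."""
    if not personal_infos:                      # S101: nobody recognized
        return None
    if len(personal_infos) > 1:                 # S104 Yes -> S105
        key = Counter(personal_infos).most_common(1)[0][0]
    else:                                       # S104 No
        key = personal_infos[0]
    if not scenario_ok():                       # S106 / S110
        return None
    return find_content(key)                    # None corresponds to S108 No

# Invented example data: five "female, 20s" and one "male, 20s" passersby.
contents = {("female", "20s"): "cosmetics.mp4"}
result = display_control_cycle(
    [("female", "20s")] * 5 + [("male", "20s")],
    scenario_ok=lambda: True,
    find_content=contents.get)
```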

  Next, a content display restriction process and a display restriction release process performed by the content display control apparatus 100a shown in FIG. 2 will be described. FIG. 7 is a flowchart showing the procedure of the content display restriction process and the display restriction release process. The content display restriction process and the display restriction release process are activated and executed at a predetermined cycle.

  First, the content display control processing unit 101d determines whether content is being displayed on the content display screen 112a (step S121). When it is determined that content is being displayed (Yes at step S121), the process proceeds to step S122; otherwise (No at step S121), the content display restriction process and the display restriction release process are terminated.

  In step S122, the personal attribute information acquisition processing unit 101a determines whether or not a passerby has been recognized from the images acquired by the first camera 111a to the third camera 111c. If it is determined that the passerby has been recognized (Yes at Step S122), the process proceeds to Step S123. If the passerby is not recognized (No at Step S122), the process proceeds to Step S125.

  In step S123, the content display control processing unit 101d determines whether time is being measured by the timer. When it is determined that time is being measured (Yes at step S123), the process proceeds to step S124; otherwise (No at step S123), the content display restriction process and the display restriction release process end.

  In step S124, the content display control processing unit 101d ends the time measurement by the timer. When this process ends, the content display restriction process and the display restriction release process end.

  On the other hand, in step S125, the content display control processing unit 101d determines whether time is being measured by the timer. When it is determined that time is being measured (Yes at step S125), the process proceeds to step S127; otherwise (No at step S125), the process proceeds to step S126.

  In step S126, the content display control processing unit 101d starts time measurement using the timer. Subsequently, the content display control processing unit 101d determines whether a predetermined time has elapsed since the start of time measurement (step S127). When it is determined that the predetermined time has elapsed (Yes at step S127), the process proceeds to step S128; otherwise (No at step S127), the process returns to step S122.

  In step S128, the content display control processing unit 101d sets the display of the content display screen 112a to the “initial state”. Here, the “initial state” displays information other than advertisements, for example station guidance information, train delay information, weather information, or disaster information when the content display device is installed in a station concourse. These pieces of information are displayed on the content display screen 112a in the “initial state” in which no advertisement is displayed on the content display screen 112a of the content display device 112.

  Subsequently, the content display control processing unit 101d ends the time measurement by the timer (step S129). When this process ends, the content display restriction process and the display restriction release process end.

  Through the content display restriction process and the display restriction release process, when no passerby has been recognized in the images acquired by the first camera 111a to the third camera 111c for a predetermined time, the display of content on the content display screen 112a is restricted (for example, the display is set to the “initial state” or cleared entirely). This reduces the power consumed by the content display device 112, prevents burn-in of the content display screen 112a, and extends the product life of the content display device 112.
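  The timer logic of FIG. 7 can be sketched as a small state machine. Tick counting stands in for the timer, and the check that content is currently displayed (step S121) is omitted for brevity; the tick limit is an illustrative stand-in for the predetermined time.

```python
class DisplayRestrictor:
    """Sketch of the timer logic in FIG. 7: once no passerby has been
    recognized for `limit` consecutive ticks, the screen falls back to
    the "initial state". Recognizing a passerby stops and resets the
    timer (steps S122-S124); continued absence lets it run until the
    predetermined time elapses (steps S125-S128)."""

    def __init__(self, limit=3):
        self.limit = limit
        self.idle_ticks = 0        # 0 means the timer is not running
        self.initial_state = False

    def tick(self, passerby_seen):
        if passerby_seen:          # S122 Yes -> S124: stop the timer
            self.idle_ticks = 0
            self.initial_state = False
            return
        self.idle_ticks += 1       # S125/S126: start or continue timing
        if self.idle_ticks >= self.limit:
            self.initial_state = True   # S128: back to the initial state

r = DisplayRestrictor(limit=3)
for seen in [False, False, False]:  # nobody recognized for 3 ticks
    r.tick(seen)
```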

  Hereinafter, Example 2 will be described with reference to FIGS. 8 and 9. The second embodiment shows a content display control apparatus that calculates, for each item of “personal attribute information” of the unspecified number of passers-by captured by the cameras, the proportion of passers-by whose face direction is determined to be facing the content display screen, as a “content visibility rate”. The second embodiment is an example in which a configuration for calculating the “content visibility rate” is added to the content display control apparatus 100a according to the first embodiment. In the description of the second embodiment, only differences from the first embodiment will be described.

  First, the configuration of the content display control apparatus according to the second embodiment will be described. FIG. 8 is a functional block diagram of the configuration of the content display control apparatus according to the second embodiment. The content display control device 100b according to the second embodiment has a configuration in which a display screen viewing passer count calculation processing unit 101e and a content visibility rate calculation processing unit 101f are added to the control unit 101 of the content display control device 100a according to the first embodiment.

  Note that, in the control unit 101 of the content display control device 100b, the display screen viewing passer count calculation processing unit 101e and the content visibility rate calculation processing unit 101f are arranged between the personal attribute information acquisition processing unit 101a and the content selection processing unit 101c. The processing order among the macro attribute information determination processing unit 101b, the display screen viewing passer count calculation processing unit 101e, and the content visibility rate calculation processing unit 101f is not limited.

  The display screen viewing passer count calculation processing unit 101e totals, for each item of “personal attribute information” such as “sex” and “age group”, the number of passers-by whose “personal attribute information” is stored in the passer-by personal attribute information storage table of the passer-by personal attribute information storage DB 102b and whose “face detection start date and time” falls within a predetermined time range (hereinafter, the “total passers-by”).

  In addition, the display screen viewing passer count calculation processing unit 101e totals, for each item of “personal attribute information” such as “sex” and “age group”, the number of passers-by whose “personal attribute information” is stored in the passer-by personal attribute information storage table of the passer-by personal attribute information storage DB 102b, whose “face detection start date and time” falls within the same predetermined time range, and whose “face angle” is within a specified range (hereinafter, the “gazing passers-by”).

  The content visibility rate calculation processing unit 101f divides the number of “gazing passers-by” by the number of “total passers-by” for each item of “personal attribute information” such as “sex” and “age group” calculated by the display screen viewing passer count calculation processing unit 101e, thereby calculating the “content visibility rate”.

  Note that a “content visibility rate” column may be provided in the content storage table of the content DB 102d, and the “content visibility rate” for each content may be stored there. In this case, the content visibility rate calculation processing unit 101f adds the calculated “content visibility rate” to the record corresponding to the content currently displayed on the content display screen 112a in the content storage table.

  In this way, by calculating the “content visibility rate” for each item of “personal attribute information” for each content and storing it in association with the content in the content storage table of the content DB 102d, the display effect of the content can be measured.

  Next, the content visibility rate calculation process executed by the content display control device 100b shown in FIG. 8 will be described. FIG. 9 is a flowchart showing the content visibility rate calculation processing procedure. As shown in the figure, first, the display screen viewing passer count calculation processing unit 101e refers to the passer-by personal attribute information storage DB 102b, sets an extraction period for the “face detection start date and time”, and counts the number of records (α) falling within the extraction period for each item of “personal attribute information” (step S131).

  Subsequently, the display screen viewing passer count calculation processing unit 101e refers to the passer-by personal attribute information storage DB 102b, sets the same “face detection start date and time” extraction period as in step S131, and counts, for each item of “personal attribute information”, the number of records (β) within the extraction period whose “face angle” is equal to or greater than a predetermined angle (step S132).

  Subsequently, the display screen viewing passer count calculation processing unit 101e calculates β/α for each item of “personal attribute information” as the “content visibility rate” (step S133). Subsequently, in the content storage table of the content DB 102d, the “content visibility rate” for each item of “personal attribute information” is added to the record of the content for which the “content visibility rate” was calculated in step S133 (step S134).
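The β/α computation of steps S131 to S133 can be sketched as follows, assuming the accumulated records are dictionaries with illustrative field names (`sex`, `age_group`, `face_detect_start`, `face_angle`) and an assumed gaze-angle threshold:

```python
from collections import defaultdict

def content_visibility_rates(records, start, end, gaze_angle_deg=30.0):
    """Compute the 'content visibility rate' (beta / alpha) per
    personal-attribute group from accumulated passer-by records.

    Each record is assumed to hold 'sex', 'age_group',
    'face_detect_start' (a sortable timestamp) and 'face_angle';
    the field names and the angle threshold are illustrative,
    not values taken from the patent.
    """
    alpha = defaultdict(int)  # total passers-by per attribute group
    beta = defaultdict(int)   # passers-by judged to be gazing
    for rec in records:
        if not (start <= rec["face_detect_start"] <= end):
            continue  # outside the extraction period (step S131)
        key = (rec["sex"], rec["age_group"])
        alpha[key] += 1
        # step S132: face angle at or above the predetermined angle
        if rec["face_angle"] >= gaze_angle_deg:
            beta[key] += 1
    # step S133: beta / alpha per attribute group
    return {key: beta[key] / alpha[key] for key in alpha}
```

Step S134 would then write each returned rate back into the content record currently displayed.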

  In this way, by calculating the “content visibility rate” for each item of “personal attribute information” and adding it to the content record in the content storage table, it becomes possible to verify the validity of the assumed target attributes of the content, and the display effect of the content, by comparing the “macro attribute information” of the content's intended target assumed by the user with the “content visibility rate” for each item of “personal attribute information” of the passers-by who actually viewed the content.

  Example 3 will be described below with reference to FIGS. 10 to 12-2. The third embodiment shows a content display control device that, when detecting the faces of an unspecified number of passers-by captured by the cameras, first identifies the moving-object parts of an image and performs face detection only on those parts. The third embodiment is an example in which the personal attribute information acquisition processing unit 101a of the control unit 101 of the content display control apparatus 100a according to the first embodiment further includes a passerby image acquisition processing unit 101g and a moving body image region specifying processing unit 101h. In the description of the third embodiment, only differences from the first embodiment will be described.

  First, the configuration of the content display control apparatus according to the third embodiment will be described. FIG. 10 is a functional block diagram of the configuration of the content display control apparatus according to the third embodiment. In the content display control device 100c according to the third embodiment, the personal attribute information acquisition processing unit 101a of the control unit 101 of the content display control device 100a according to the first embodiment includes a passerby image acquisition processing unit 101g and a moving body image region specifying processing unit 101h.

  The passerby image acquisition processing unit 101g acquires images of an unspecified number of passers-by captured by the first camera 111a to the third camera 111c. In addition, the moving body image region specifying processing unit 101h specifies the image region of a moving object (moving body image) in the image acquired by the passerby image acquisition processing unit 101g.

  Next, face detection processing executed by the content display control device 100c shown in FIG. 10 will be described. FIG. 11 is a flowchart showing a face detection processing procedure. As shown in the figure, first, the moving body image region specifying processing unit 101h sets the size of the “minimum face detection area” (step S141). Here, the “minimum face detection area” is the minimum unit of an image scanning block that scans an image to detect a face.

  Subsequently, the moving body image area specifying processing unit 101h sets a “face detection image area” (step S142). Here, the “face detection image area” is a minimum rectangular area surrounding the minimum area of the face to be detected.

  Subsequently, the moving body image region specifying processing unit 101h acquires an image frame to be a face detection target (sets an image frame) from the passerby image acquisition processing unit 101g (step S143). Subsequently, the moving body image region identification processing unit 101h initializes the face detection start position of the “minimum face detection area” (step S144).

  Subsequently, the moving body image region specifying processing unit 101h performs face detection processing in the image frame set in step S143 (step S145). Subsequently, the moving body image region specifying processing unit 101h moves the face detection position of the “minimum face detection area” by one unit in the scanning order (step S146).

  Subsequently, the moving body image region specifying processing unit 101h determines whether or not a moving object exists in the “minimum face detection area” at the current face detection position (step S147). Specifically, a moving object in the image is detected by calculating the difference between successive frames and applying image processing to the difference. When it is determined that a moving object exists (Yes at step S147), the process proceeds to step S148; when it is not determined that a moving object exists (No at step S147), the process proceeds to step S146.
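The moving-object determination of step S147 (difference between successive frames) might be sketched as follows; the frames are nested lists of grayscale values, and both thresholds are illustrative tuning parameters rather than values from the patent:

```python
def has_motion(prev_frame, cur_frame, region, pixel_thresh=25, count_thresh=10):
    """Return True if the 'minimum face detection area' given by
    region = (x, y, w, h) contains a moving object, judged from the
    absolute difference between two successive grayscale frames.

    Frames are nested lists of 0-255 intensities. A pixel counts as
    changed when its inter-frame difference exceeds pixel_thresh;
    the region counts as moving once count_thresh pixels changed.
    """
    x, y, w, h = region
    changed = 0
    for row in range(y, y + h):
        for col in range(x, x + w):
            if abs(cur_frame[row][col] - prev_frame[row][col]) > pixel_thresh:
                changed += 1
                if changed >= count_thresh:
                    return True  # enough changed pixels: motion present
    return False
```

Restricting the (expensive) face detector to regions where this test fires is what shortens the processing time in this embodiment.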

  In step S148, the moving body image region specifying processing unit 101h determines whether or not the current face detection position of the “minimum face detection area” has reached the end of the “face detection image area”. When it is determined that the current face detection position has reached the end of the “face detection image area” (Yes at step S148), the process proceeds to step S149; when it is not determined that the current face detection position has reached the end of the “face detection image area” (No at step S148), the process proceeds to step S145.

  In step S149, the moving body image region specifying processing unit 101h acquires the next face detection target image frame from the passerby image acquisition processing unit 101g (sets the next image frame). Subsequently, the moving body image region specifying processing unit 101h determines whether or not the setting of image frames has stopped, that is, whether the next face detection target image frame can no longer be acquired from the passerby image acquisition processing unit 101g (step S150).

  When it is determined that the setting of image frames has stopped (Yes at step S150), the face detection process ends; when it is not determined that the setting of image frames has stopped (No at step S150), the process proceeds to step S144.

  The outline of the face detection process described above will be described with reference to FIGS. 12-1 and 12-2. FIG. 12-1 is a diagram explaining the outline of a conventional face detection process. In the conventional face detection process, almost the entire area of the image frame is treated as the “face detection image area”, and face detection and detection position movement processing are performed over this wide area. Since face detection carries a large processing load, performing it over almost the entire image frame takes a great deal of processing time.

  In contrast, as shown in FIG. 12-2, which illustrates the outline of the face detection process according to the third embodiment, only the portions of the image frame in which a moving object is detected are set as the “face detection image area”, and face detection and detection position movement processing are performed only within this limited area. For this reason, the processing load of the face detection process can be reduced and the processing time shortened.

  Embodiment 4 will be described below with reference to FIGS. 13 to 15. The fourth embodiment shows a content display control device that tunes the passer-by recognition module, used when acquiring the “personal attribute information” of an unspecified number of passers-by captured by the cameras, so that its recognition accuracy becomes higher. The fourth embodiment is an example in which the personal attribute information acquisition processing unit 101a of the control unit 101 of the content display control apparatus 100a according to the first embodiment further includes a background image acquisition processing unit 101i and a passer-by recognition module update processing unit 101j. In the description of the fourth embodiment, only differences from the first embodiment will be described.

  First, the configuration of the content display control apparatus according to the fourth embodiment will be described. FIG. 13 is a functional block diagram of the configuration of the content display control apparatus according to the fourth embodiment. In the content display control device 100d according to the fourth embodiment, the personal attribute information acquisition processing unit 101a of the control unit 101 of the content display control device 100a according to the first embodiment includes a background image acquisition processing unit 101i and a passer-by recognition module update processing unit 101j.

  The background image acquisition processing unit 101i determines whether or not a passerby is included in the images captured by the first camera 111a to the third camera 111c and, when it determines that no passerby is included in a captured image, acquires that image as a background image.

  The passer-by recognition module update processing unit 101j uses the background image acquired by the background image acquisition processing unit 101i to reconfigure and update the passer-by recognition module stored in the passer-by recognition module storage unit 102a.

  Typically, the passer-by recognition module is composed of an algorithm for identifying, in an input image, the parts that are passers-by and the parts that are not, and the parts that are passers-by's faces and the parts that are not, together with data for outputting “personal attribute information” based on the identified passers-by and their faces. The passer-by recognition module is based on well-known technology provided by various vendors.

  In the fourth embodiment, since the fields of view of the first camera 111a to the third camera 111c are fixed, the parts of the captured image other than those in which passers-by appear are basically stationary. However, the luminance of the image may change subtly as the lighting at the installation locations of the first camera 111a to the third camera 111c changes, or as reflections of illumination shift with the movement of passers-by and other objects. When such a luminance change, or the appearance of an object placed in the camera's field of view, resembles a passer-by's face (for example, a pattern positioned similarly to facial elements such as eyes, nose, and mouth), there is a risk of misjudging it as the face of a passer-by.

  Therefore, in the fourth embodiment, background images that contain no passers-by and that are acquired at the locations where the first camera 111a to the third camera 111c are installed are included among the original images used to create the passer-by recognition module's algorithm for identifying passers-by and non-passer-by parts, and passers-by's faces and non-face parts. By reconstructing the algorithm with these background images included and then updating the module with the reconstructed passer-by recognition module, the accuracy of face recognition by the passer-by recognition module is improved.
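One way to fold site-specific background images into the module's training data, as described above, is to slice them into non-face (negative) sample patches. This is a sketch under the assumption that the vendor's module can be retrained from labeled patches; the patch size and stride are illustrative:

```python
def background_patches(background, patch=(24, 24), stride=24):
    """Slice a site-specific, passerby-free background image (a
    nested list of grayscale rows) into fixed-size patches to serve
    as extra non-face (negative) training samples when the
    passer-by recognition module is rebuilt.

    The patent does not specify the learning method; this only
    prepares the negative samples that would be fed to it.
    """
    h, w = len(background), len(background[0])
    ph, pw = patch
    patches = []
    for y in range(0, h - ph + 1, stride):
        for x in range(0, w - pw + 1, stride):
            # crop a ph x pw window starting at (x, y)
            patches.append([row[x:x + pw] for row in background[y:y + ph]])
    return patches
```

Face-like luminance patterns in the installation site (the misjudgment risk described above) thus become explicit negatives during reconstruction instead of false positives at run time.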

  Next, a passerby recognition module update process executed by the content display control apparatus 100d shown in FIG. 13 will be described. FIG. 14 is a flowchart showing a passerby recognition module update processing procedure. As shown in the figure, first, the background image acquisition processing unit 101i determines whether or not a background image has been acquired (step S161).

  When it is determined that the background image has been acquired (Yes at Step S161), the process proceeds to Step S162. When it is not determined that the background image has been acquired (No at Step S161), Step S161 is repeated.

  In step S162, the passer-by recognition module update processing unit 101j reconfigures the passer-by recognition module stored in the passer-by recognition module storage unit 102a based on the background image acquired by the background image acquisition processing unit 101i. Subsequently, the passer-by recognition module update processing unit 101j updates the passer-by recognition module stored in the passer-by recognition module storage unit 102a with the reconfigured module (step S163).

  By performing the above processing, the “personal attribute information” of an unspecified number of passers-by is acquired by a passer-by recognition module that takes the influence of the background image into account, preventing false detection of passers-by and erroneous acquisition of “personal attribute information”.

  Note that the reconfiguration of the passer-by recognition module may be performed not by the content display control apparatus 100d but by a center server, as shown in FIG. 15. That is, (1) the background image acquired by the content display control device 100d is encrypted and transmitted to the center server via a secure network.

  (2) Subsequently, the center server receives the background image. (3) The center server then selects images that contain no faces (non-face images) from the received background images, and (4) reconstructs the passer-by recognition module using the selected background images. (5) The center server then transmits the reconfigured passer-by recognition module to the content display control device 100d via the secure network. (6) Finally, the content display control device 100d updates the existing passer-by recognition module stored in the passer-by recognition module storage unit 102a with the passer-by recognition module reconstructed using the background images.

  Embodiment 5 will be described below with reference to FIGS. 16 to 19. The fifth embodiment shows a content display control apparatus that performs automatic calibration of the cameras. Example 5 is an example in which a configuration for performing automatic calibration of the first camera 111a to the third camera 111c is added to the content display control apparatus 100a according to Example 1. In the description of the fifth embodiment, only differences from the first embodiment will be described.

  First, the configuration of the content display control apparatus according to the fifth embodiment will be described. FIG. 16 is a functional block diagram of the configuration of the content display control apparatus according to the fifth embodiment. The content display control device 100e according to the fifth embodiment has a configuration in which a camera characteristic value acquisition processing unit 101k and a camera characteristic value adjustment processing unit 101l are added to the control unit 101 of the content display control device 100a according to the first embodiment, and a camera ideal characteristic value storage unit 102e is added to the storage unit 102.

  The functional blocks of the first camera 111a to the third camera 111c are as follows, described using the first camera 111a as a representative example. The second camera 111b and the third camera 111c are the same as the first camera 111a.

  The first camera 111a includes a camera direction control unit 111a-1, a camera direction variable mechanism unit 111a-2, a zoom magnification control unit 111a-3, an aperture value control unit 111a-4, a focus value control unit 111a-5, a lens mechanism unit 111a-6, an aperture mechanism unit 111a-7, a CMOS (Complementary Metal Oxide Semiconductor) sensor unit 111a-8, and a signal interface unit 111a-9 that transmits the camera characteristic values to the content display control device 100e and receives instruction signals from the content display control device 100e (for example, a “direction control signal” for changing the camera direction, a “zoom control signal” for changing the zoom magnification of the lens or the CMOS sensor, an “aperture control signal” for changing the lens aperture value, and a “focus control signal” for changing the focus value of the lens).

  The image sensor of the camera is not limited to a CMOS sensor, but may be a CCD (Charge Coupled Device).

  The camera direction control unit 111a-1 controls the camera direction variable mechanism unit 111a-2 based on the “direction control signal” received from the content display control device 100e to change the camera direction (the direction of the camera's field of view). The camera direction variable mechanism unit 111a-2 includes a mechanism that can freely change the camera direction vertically and horizontally, together with its power unit.

  The zoom magnification control unit 111a-3 acquires the zoom magnification from the lens mechanism unit 111a-6 or the CMOS sensor unit 111a-8, transmits it to the content display control device 100e, and controls the lens mechanism unit 111a-6 or the CMOS sensor unit 111a-8 based on the “zoom control signal” received from the content display control device 100e to control the zoom magnification. When the zoom magnification is changed by controlling the lens mechanism unit 111a-6, the optical zoom magnification changes; when it is changed by controlling the CMOS sensor unit 111a-8, the digital zoom magnification changes.

  The lens mechanism unit 111a-6 combines a plurality of optical lenses and controls the focal point and the optical zoom magnification of the light that enters it and is directed onto the CMOS sensor unit 111a-8.

  The aperture value control unit 111a-4 acquires the aperture value from the aperture mechanism unit 111a-7, transmits it to the content display control device 100e, and controls the aperture mechanism unit 111a-7 based on the “aperture control signal” received from the content display control device 100e to control the aperture of the camera. The aperture mechanism unit 111a-7 is a shutter mechanism that controls the exposure time for the CMOS sensor unit 111a-8.

  The focus value control unit 111a-5 acquires the focus value from the lens mechanism unit 111a-6, transmits it to the content display control device 100e, and controls the lens mechanism unit 111a-6 based on the “focus control signal” received from the content display control device 100e to control the focal point of light incident on the camera.

  On the other hand, when an image is transmitted from each of the first camera 111a to the third camera 111c, the camera characteristic value acquisition processing unit 101k of the content display control device 100e receives the characteristic values of each camera (for example, the camera's lens aperture value, lens focus value, and lens or CMOS sensor zoom magnification).

  The camera characteristic value adjustment processing unit 101l refers to the camera ideal characteristic value table stored in the camera ideal characteristic value storage unit 102e described later, and determines whether the characteristic values of each camera acquired by the camera characteristic value acquisition processing unit 101k are within the ranges of the camera ideal characteristic values.

  Then, when the characteristic values of a camera acquired by the camera characteristic value acquisition processing unit 101k are not within the ranges of the camera ideal characteristic values, the camera characteristic value adjustment processing unit 101l outputs control signals to that camera's camera direction control unit 111a-1, zoom magnification control unit 111a-3, aperture value control unit 111a-4, and focus value control unit 111a-5, and controls the camera so that its characteristic values fall within the camera ideal characteristic value ranges.

  Next, a camera ideal characteristic value table stored in the camera ideal characteristic value storage unit 102e of the storage unit 102 illustrated in FIG. 16 will be described. FIG. 17 is a diagram illustrating an example of a camera ideal characteristic value table.

As shown in the figure, the camera ideal characteristic value table stores an upper limit and a lower limit of the “aperture value”, an upper limit and a lower limit of the “focus value”, and an upper limit and a lower limit of the “zoom magnification” for each camera. For example, if the “aperture value” of the first camera 111a is α1 or more and β1 or less, it is within the ideal range.

  Next, the camera automatic calibration process executed by the content display control apparatus 100e shown in FIG. 16 will be described. FIG. 18 is a flowchart showing the camera automatic calibration processing procedure.

  First, the camera characteristic value acquisition processing unit 101k determines whether or not a characteristic value of a camera (at least one of the first camera 111a to the third camera 111c) has been acquired (step S171). When it is determined that the characteristic value of the camera has been acquired (Yes at Step S171), the process proceeds to Step S172, and when it is not determined that the characteristic value of the camera has been acquired (No at Step S171), Step S171 is repeated.

  In step S172, the camera characteristic value acquisition processing unit 101k refers to the ideal characteristic value ranges stored in the camera ideal characteristic value table of the camera ideal characteristic value storage unit 102e, and determines whether the camera characteristic values acquired in step S171 are within the ideal characteristic value ranges (step S172).

  When it is determined that the camera characteristic values are within the ideal characteristic value ranges (Yes at step S172), the camera automatic calibration process ends; when it is not determined that the camera characteristic values are within the ideal ranges (No at step S172), the process proceeds to step S173. A camera whose characteristic values are not determined to be within the ideal ranges is referred to as a control target camera.

  In step S173, the camera characteristic value adjustment processing unit 101l controls the control target camera so that its characteristic values fall within the ideal ranges (for example, by transmitting at least one of a “direction control signal”, a “zoom control signal”, an “aperture control signal”, and a “focus control signal” to the control target camera). When this process ends, the process moves to step S171.
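The range check of step S172 and the corrective control of step S173 can be sketched as follows. The ideal-range values stand in for the table of FIG. 17, and the policy of clamping an out-of-range value to the nearest bound is an assumption, not a rule stated in the patent:

```python
IDEAL_RANGES = {
    # Illustrative stand-ins for the camera ideal characteristic
    # value table of FIG. 17: (lower limit, upper limit).
    "aperture": (2.0, 8.0),
    "focus": (0.5, 3.0),
    "zoom": (1.0, 4.0),
}

def calibration_signals(characteristics, ideal=IDEAL_RANGES):
    """Compare acquired camera characteristic values with the ideal
    ranges (step S172) and return the target values that would be
    sent as control signals to the camera (step S173).

    An empty result means every value is already within its ideal
    range (Yes at step S172), so no control signal is needed.
    """
    signals = {}
    for name, value in characteristics.items():
        lower, upper = ideal[name]
        if value < lower:
            signals[name] = lower   # raise the value to the lower bound
        elif value > upper:
            signals[name] = upper   # reduce the value to the upper bound
    return signals
```

In the full loop of FIG. 18 this check would run each time characteristic values arrive, returning to step S171 after any correction.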

  By performing the above processing, the cameras are calibrated automatically after the system including the content display control device 100e is installed, so appropriate camera characteristic values can be secured quickly and accurately without human intervention. In addition, the cameras can be calibrated to follow changes in the surrounding environment where they are installed, so that appropriate characteristic values are maintained at all times and the level of face recognition accuracy is kept constant.

  In addition, as shown in FIG. 19, which explains the outline of the camera automatic calibration process, the first camera 111a to the third camera 111c are installed, for example, at three locations on the upper part of the content display device 112 (the right end, the center, and the left end), so as to face in the direction of the content display screen 112a.

  Each camera is installed on a camera pedestal. As shown in FIG. 19, the camera pedestal consists of two flat plates coupled via a hinge: one plate is fixed to the upper surface of the content display device 112, and the camera is fixed to the other. The vertical angle can be adjusted by moving the camera-fixing plate about the hinge, and the horizontal angle can be adjusted by rotating the camera pedestal itself horizontally with a rotation shaft equipped with a rotational power source.

  Hereinafter, Example 6 will be described with reference to FIGS. 20 and 21. The sixth embodiment shows a content display control device that detects a suspicious abandoned object within the camera's field of view and issues a notification of its presence. The sixth embodiment is an example in which the personal attribute information acquisition processing unit 101a of the control unit 101 of the content display control apparatus 100a according to the first embodiment further includes a background image acquisition processing unit 101i and a non-moving object image partial extraction processing unit 101m, and the control unit 101 further includes a background image difference acquisition processing unit 101n and a non-moving object detection processing unit 101o. In the description of the sixth embodiment, only differences from the first embodiment will be described.

  First, the configuration of the content display control apparatus according to the sixth embodiment will be described. FIG. 20 is a functional block diagram of the configuration of the content display control apparatus according to the sixth embodiment. In the content display control device 100f according to the sixth embodiment, the personal attribute information acquisition processing unit 101a of the control unit 101 of the content display control device 100a according to the first example includes a background image acquisition processing unit 101i and a non-moving object image partial extraction processing unit 101m, the control unit 101 further includes a background image difference acquisition processing unit 101n and a non-moving object detection processing unit 101o, and a background image storage unit 102f is added to the storage unit 102.

  The background image acquisition processing unit 101i determines whether or not a passerby is included in the images captured by the first camera 111a to the third camera 111c and, when it determines that no passerby is included in a captured image, acquires that image as a background image and stores it in the background image storage unit 102f.

  Further, the background image acquisition processing unit 101i compares the pixel values of the background image stored in the background image storage unit 102f with those of a newly acquired background image and, when the difference or amount of change in pixel values is equal to or greater than a predetermined value, updates the background image stored in the background image storage unit 102f with the newly acquired background image.
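The background update rule above might be sketched as follows; the images are nested lists of grayscale values, and both thresholds (per-pixel difference, and the proportion of changed pixels that triggers an update) are illustrative:

```python
def should_update_background(stored, candidate, pixel_thresh=20, ratio_thresh=0.05):
    """Decide whether to replace the stored background image with a
    newly acquired passerby-free image, as done by the background
    image acquisition processing unit 101i.

    A pixel counts as changed when its absolute difference exceeds
    pixel_thresh; the stored background is replaced when the ratio
    of changed pixels reaches ratio_thresh.
    """
    total = changed = 0
    for srow, crow in zip(stored, candidate):
        for s, c in zip(srow, crow):
            total += 1
            if abs(s - c) > pixel_thresh:
                changed += 1
    return changed / total >= ratio_thresh
```

Keeping the stored background fresh in this way prevents gradual lighting changes from being misread later as a foreground difference.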

  The non-moving object image partial extraction processing unit 101m extracts non-moving objects included in an image captured by at least one of the first camera 111a to the third camera 111c. A non-moving object is extracted by recognizing an identical image part that does not appear in the difference images of successive image frames for a predetermined time or longer, and detecting it as a non-moving (stationary) object.

  Then, the background image difference acquisition processing unit 101n acquires a difference image between the image from which the non-moving object was extracted and the background image stored in the background image storage unit 102f. When the image part extracted as a non-moving object appears in the difference image with the background image (that is, it is acquired as a difference), the non-moving object detection processing unit 101o detects the non-moving object as an abandoned object and, treating it as a suspicious item to be notified, instructs the notification terminal device 113 via the communication interface unit 103 to perform a notification operation (such as displaying a warning screen or outputting a warning sound).
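The combination of the non-moving object part and the background difference described above can be sketched as follows, using 0/1 masks and an illustrative overlap threshold (the mask representation and threshold are assumptions for the sketch):

```python
def detect_abandoned(nonmoving_mask, background_diff_mask, min_pixels=20):
    """Flag an abandoned object when an image part that stayed still
    across successive frames (the non-moving object mask) also
    differs from the stored background (the background difference
    mask) — i.e. something stationary that was not there before.

    Masks are nested lists of 0/1 values; min_pixels is the minimum
    overlapping area treated as an abandoned-object detection.
    """
    overlap = sum(
        n & b
        for nrow, brow in zip(nonmoving_mask, background_diff_mask)
        for n, b in zip(nrow, brow)
    )
    return overlap >= min_pixels
```

A positive result here corresponds to instructing the notification terminal device 113 to raise the warning.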

  Next, the abandoned object detection process executed by the content display control apparatus 100f shown in FIG. 20 will be described. FIG. 21 is a flowchart showing the abandoned object detection processing procedure. As shown in the figure, first, the background image acquisition processing unit 101i determines whether or not a background image has been acquired (step S181).

  When it is determined that a background image has been acquired (Yes at Step S181), the process proceeds to Step S182. When it is not determined that a background image has been acquired (No at Step S181), Step S181 is repeated.

  In step S182, the non-moving object image partial extraction processing unit 101m reads image frame 1, the earlier of two consecutive image frames. Subsequently, the non-moving object image partial extraction processing unit 101m reads image frame 2, the later of the two consecutive image frames (step S183).

  Then, the non-moving object image partial extraction processing unit 101m creates a difference image between image frame 1 and image frame 2 (step S184), and extracts a non-moving object image portion from the difference image created in step S184 (step S185).

  Subsequently, the background image difference acquisition processing unit 101n calculates a difference image between the non-moving object image portion extracted in step S185 and the background image stored in the background image storage unit 102f (step S186).

  Subsequently, the non-moving object detection processing unit 101o determines whether there is an abandoned object candidate in the difference image calculated in Step S186 (Step S187). If it is determined that there is an abandoned object candidate (Yes at Step S187), the process proceeds to Step S189, and if it is not determined that there is an abandoned object candidate (No at Step S187), the process proceeds to Step S188.

  In step S188, the non-moving object detection processing unit 101o stops the timer if the timer is running. When this process ends, the process returns to step S181.

  On the other hand, in step S189, the non-moving object detection processing unit 101o starts the timer if it is not already running. Subsequently, the non-moving object detection processing unit 101o determines whether the time measured by the timer has exceeded a predetermined time (step S190).

  When it is determined that the time measured by the timer has exceeded the predetermined time (Yes at Step S190), the process proceeds to Step S191, and when it is not determined that the time measured by the timer has exceeded the predetermined time (No at Step S190), Control goes to step S182.

  In step S191, the non-moving object detection processing unit 101o detects the non-moving object determined as the candidate for the abandoned object in step S187 as the abandoned object. Then, the non-moving object detection processing unit 101o instructs the notification terminal device 113 to notify the abandoned object detection (step S192).
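The S181 to S192 loop amounts to a small state machine in which a timer runs only while an abandoned-object candidate persists. A minimal sketch, with the hold time, class name, and injectable clock as assumptions (the patent only specifies "a predetermined time"):

```python
import time

ABANDON_SECONDS = 10.0   # hypothetical "predetermined time"

class AbandonedObjectDetector:
    """Sketch of the S187-S191 logic: a candidate must persist in the
    background difference for a predetermined time before notification."""

    def __init__(self, hold_seconds=ABANDON_SECONDS, clock=time.monotonic):
        self.hold = hold_seconds
        self.clock = clock
        self.started_at = None       # timer not running

    def step(self, candidate_present):
        """Call once per processed frame pair; returns True when the
        candidate has persisted long enough to be reported (S191)."""
        if not candidate_present:
            self.started_at = None           # S188: stop the timer
            return False
        if self.started_at is None:
            self.started_at = self.clock()   # S189: start the timer
        return self.clock() - self.started_at >= self.hold   # S190
```

Resetting the timer the moment the candidate disappears (S188) is what prevents a passer-by who merely pauses from triggering a notification.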

  By performing the above processing, suspicious objects present in the camera field of view can be detected and dealt with quickly, making it possible to maintain the security of the surroundings where the content display device 112 is installed.

  While embodiments of the present invention have been described above, the present invention is not limited thereto and may be practiced in various other embodiments within the scope of the technical idea described in the claims. Likewise, the effects described in the embodiments are not limiting.

  Specifically, as shown in FIG. 22, which illustrates an installation example (part 1) of the camera, the content display control device, and the content display device, a plurality of cameras (for example, cameras 111d to 111g) are installed in order to image passers-by passing through the passage in front of the content display screen 112a. The field of view of a single camera is limited, and it cannot capture all passers-by passing in front of the content display screen 112a.

  However, by dividing the coverage among a plurality of cameras and combining all of their fields of view, a wide area of the passage in front of the content display screen 112a can be covered, and an unspecified number of passers-by can be recognized more quickly. As the number of cameras increases, the "personal attribute information" of the unspecified number of passers-by passing through the area in front of the content display screen 112a can be acquired in finer detail.

  Also, when recognizing an unspecified number of passers-by with a plurality of cameras, the movements of each passer-by (for example, the flow line, walking direction, and repeated patterns of walking and stopping) can be grasped by uniquely identifying and tracking the passers-by whose "personal attribute information" has been acquired.
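One simple way to realize such unique identification and tracking is nearest-neighbour matching of detected positions between frames. The patent does not specify a tracking algorithm, so the distance gate and the function below are purely illustrative:

```python
MAX_JUMP = 50.0   # hypothetical distance gate, in pixels, between frames

def update_tracks(tracks, detections, next_id, max_jump=MAX_JUMP):
    """tracks: {passerby_id: (x, y)} last known positions.
    detections: list of (x, y) positions detected in the current frame.
    Returns (updated_tracks, next_id); the caller can append each matched
    position to a per-id history to build the passer-by's flow line."""
    updated = {}
    unmatched = list(detections)
    for pid, (px, py) in tracks.items():
        if not unmatched:
            break
        # match this passer-by to the nearest current detection
        best = min(unmatched, key=lambda d: (d[0] - px) ** 2 + (d[1] - py) ** 2)
        if (best[0] - px) ** 2 + (best[1] - py) ** 2 <= max_jump ** 2:
            updated[pid] = best       # same person, position updated
            unmatched.remove(best)
    for d in unmatched:               # newly seen passers-by get new identities
        updated[next_id] = d
        next_id += 1
    return updated, next_id
```

From the resulting per-id position histories, walking direction and walk/stop patterns can be derived by differencing successive positions.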

  Since the process of acquiring "personal attribute information" for an unspecified number of passers-by is computationally heavy, the functions of the content display control apparatus 100 may be load-distributed so that the "personal attribute information" acquisition process is performed per camera. In this case, the content display control apparatus 100 receives the "personal attribute information" of the unspecified number of passers-by acquired for each camera and performs only the subsequent processes, beginning with the determination of "macro attribute information". By distributing the load in this way, the speed of the recognition processing for an unspecified number of passers-by can be increased and appropriate content can be displayed quickly.
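Under this load-distributed arrangement, the central device only merges the per-camera "personal attribute information" and derives "macro attribute information" from it. A minimal sketch, assuming a simple per-attribute majority vote (the patent does not specify the aggregation rule):

```python
from collections import Counter

def macro_attributes(per_camera_reports):
    """per_camera_reports: one list per camera node, each list holding
    per-passer-by attribute dicts (e.g. {"gender": "F", "age": "20s"}).
    The central device merges them and takes the majority value per
    attribute as a stand-in for the 'macro attribute' decision."""
    merged = [p for report in per_camera_reports for p in report]
    macro = {}
    for key in {k for p in merged for k in p}:
        macro[key] = Counter(
            p[key] for p in merged if key in p
        ).most_common(1)[0][0]
    return macro
```

Because each camera node ships only small attribute records rather than video frames, the network and the central device stay lightly loaded even as cameras are added.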

  As shown in FIG. 23, which illustrates an installation example (part 2) of the camera, the content display control device, and the content display device, the camera 111 and the content display control device 100 may be remotely connected via a wired or wireless network. The camera 111 may also be arranged at a position away from the content display screen 112a.

  In this way, an unspecified number of passers-by can be imaged at a location away from the content display screen 112a, their "personal attribute information" acquired quickly, their "macro attribute information" determined, and appropriate content displayed on the content display screen 112a.

  For example, there are cases where one wants a passerby who is not yet watching the content to see it. In such cases, when the installation example shown in FIG. 23 is adopted, the content display screen 112a lies ahead of the passerby walking along the passage, so the camera 111 can acquire the "personal attribute information" of the approaching passerby in advance, from farther away, before the passerby nears the content display screen 112a, and content matching the "macro attribute information" can be obtained. It is therefore possible to display the content in a timely manner, at the moment the content display screen 112a enters the passerby's field of view.

  The camera 111 may be arranged at the lower part of the content display screen 112a as shown in FIG. 24-1, at the upper part of the content display screen 112a as shown in FIG. 24-2, or at the side of the content display screen 112a as shown in FIG. 24-3.

  Further, as shown in FIG. 24-1 to FIG. 24-3, the content display screen 112a may be divided into a plurality of screens, and content may be displayed on them simultaneously in accordance with the "scenario conditions" or "content display conditions" described in the first embodiment.

  The content display control apparatus according to the first to sixth embodiments described above is not limited to indoor/outdoor advertising and transit advertising; for example, it can also be installed in facilities through which an unspecified number of passers-by pass, such as store entrances and exits, and used for flow-line analysis and attribute analysis of passers-by. In addition, the "personal attribute information" and "macro attribute information" of passers-by in a given passage or facility, and their movements (flow lines), can be grasped without manual effort, so the apparatus can also be used for marketing and sales promotion within the facility.

  Of the processes described in the above embodiments, all or part of the processes described as being performed automatically can also be performed manually, and all or part of the processes described as being performed manually can also be performed automatically by known methods. In addition, the processing procedures, control procedures, specific names, and information including the various data and parameters shown in the above embodiments can be changed arbitrarily unless otherwise specified.

  Each component of each illustrated device is functionally conceptual and does not necessarily need to be physically configured as illustrated. In other words, the specific form of distribution and integration of each device is not limited to that shown in the figures; all or part of each device can be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions.

  Furthermore, all or any part of the processing functions performed in each device may be realized by a CPU (Central Processing Unit) (or a microcomputer such as an MPU (Micro Processing Unit) or MCU (Micro Controller Unit)) and a program that is analyzed and executed by that CPU (or microcomputer), or may be realized as hardware by wired logic.

  The present invention is useful when it is desired to switch and display information on a display device for the many passers-by passing through a passage or the like, selecting content according to the attributes of the unspecified number of passers-by as a whole so as to increase the macroscopic effect of information transmission.

A diagram for explaining the overview and features of the present embodiment.
A functional block diagram showing the configuration of the content display control apparatus according to the first embodiment.
A diagram showing an example of a passer-by personal attribute information storage table.
A diagram showing an example of a content display condition table.
A diagram showing an example of a content storage table.
A flowchart showing the content display control processing procedure.
A flowchart showing the content display restriction processing and display restriction cancellation processing procedures.
A functional block diagram showing the configuration of the content display control apparatus according to the second embodiment.
A flowchart showing the content visual recognition rate calculation processing procedure.
A functional block diagram showing the configuration of the content display control apparatus according to the third embodiment.
A flowchart showing the face detection processing procedure.
A diagram explaining the outline of conventional face detection processing.
A diagram explaining the outline of the face detection processing according to the third embodiment.
A functional block diagram showing the configuration of the content display control apparatus according to the fourth embodiment.
A flowchart showing the passerby recognition module update processing procedure.
A diagram showing the outline of another embodiment of the passerby recognition module update processing.
A functional block diagram showing the configuration of the content display control apparatus according to the fifth embodiment.
A diagram showing an example of a camera ideal characteristic value table.
A flowchart showing the camera automatic calibration processing procedure.
A diagram explaining the outline of the camera automatic calibration processing.
A functional block diagram showing the configuration of the content display control apparatus according to the sixth embodiment.
A flowchart showing the abandoned object detection processing procedure.
A diagram showing an installation example (part 1) of the camera, the content display control device, and the content display device.
A diagram showing an installation example (part 2) of the camera, the content display control device, and the content display device.
A diagram showing an installation example (part 1) of the camera and the content display device.
A diagram showing an installation example (part 2) of the camera and the content display device.
A diagram showing an installation example (part 3) of the camera and the content display device.

Explanation of symbols

100, 100a, 100b, 100c, 100d, 100e, 100f Content display control device
101 Control unit
101a Personal attribute information acquisition processing unit
101b Macro attribute information determination processing unit
101c Content selection processing unit
101d Content display control processing unit
101e Display screen viewing traffic number calculation processing unit
101f Content visual recognition rate calculation processing unit
101g Passerby image acquisition processing unit
101h Moving body image area identification processing unit
101i Background image acquisition processing unit
101j Passerby recognition module update processing unit
101k Camera characteristic value acquisition processing unit
101l Camera characteristic value adjustment processing unit
101m Non-moving object image partial extraction processing unit
101n Background image difference acquisition processing unit
101o Non-moving object detection processing unit
102 Storage unit
102a Passer-by identification module storage unit
102b Passer-by personal attribute information storage DB
102c Content display condition storage unit
102d Content DB
102e Camera ideal characteristic value storage unit
102f Background image storage unit
103 Communication interface unit
111, 111d, 111e, 111f, 111g Camera
111a First camera
111a-1 Camera direction control unit
111a-2 Camera direction variable mechanism unit
111a-3 Zoom magnification control unit
111a-4 Aperture value control unit
111a-5 Focus value control unit
111a-6 Lens mechanism unit
111a-7 Aperture mechanism unit
111a-8 CMOS sensor unit
111a-9 Communication interface unit
111b Second camera
111c Third camera
112 Content display device
112a Content display screen
113 Notification terminal device

Claims (12)

  1. A content display control device for controlling, when a passerby passes in front of the display screen of a display device arranged at a position visible to an unspecified number of passers-by, display of content suitable for the passerby on the display screen, the device comprising:
    Personal attribute information acquisition means for acquiring individual attribute information of each of a plurality of passers-by passing through the front of the display screen;
    Macro attribute information determining means for determining macro attribute information of the plurality of passers-by from the individual attribute information of the plurality of passers-by acquired by the personal attribute information acquiring means;
    Content storage means for storing a plurality of contents to be displayed on the display screen in association with the macro attribute information;
    Content selection means for searching the content storage means using the macro attribute information determined by the macro attribute information determination means and selecting content to be displayed on the display screen.
  2. The content display control device according to claim 1, further comprising personal attribute information storage means for storing, for each passerby, the individual attribute information of each of the plurality of passers-by acquired by the personal attribute information acquisition means,
    wherein the macro attribute information determining means determines the macro attribute information of all the passers-by from the personal attribute information of all the passers-by stored for a predetermined time by the personal attribute information storage means.
  3. The content display control device according to claim 1, wherein the personal attribute information acquisition means detects the face of each of the plurality of passers-by and acquires, as the personal attribute information, the attributes of each face, the size of each face, the direction of each face relative to the display screen, and the time during which the direction of each face relative to the display screen is detected at a predetermined angle or more.
  4. The content display control device according to claim 3, further comprising:
    display screen viewing traffic calculation means for calculating, from the personal attribute information for each passerby stored for the predetermined time by the personal attribute information storage means, the number of passers-by who can be considered to have viewed the display screen because the direction of the face relative to the display screen was at a predetermined angle or more; and
    content visual recognition rate calculation means for calculating a content visual recognition rate obtained by dividing the number of display-screen-viewing passers-by calculated by the display screen viewing traffic calculation means by the total number of passers-by whose personal attribute information was stored in the personal attribute information storage means for the predetermined time,
    wherein the content selection means selects content to be displayed on the display screen further based on at least one of the number of display-screen-viewing passers-by, the total number of passers-by, and the content visual recognition rate.
  5. The content display control device according to claim 1, further comprising:
    display condition storage means for storing a content display condition consisting of a predetermined condition, or a combination of predetermined conditions, for controlling display of the content on the display screen; and
    condition satisfaction determination means for determining whether the content display condition stored by the display condition storage means is satisfied,
    wherein the content selection means selects the content to be displayed on the display screen using the macro attribute information determined by the macro attribute information determination means when the condition satisfaction determination means determines that the content display condition is satisfied.
  6. The content display control device according to claim 1, wherein the personal attribute information acquisition means includes passerby image acquisition means for acquiring images of the plurality of passers-by and moving body image area identification means for identifying image areas containing the moving objects included in the passerby images acquired by the passerby image acquisition means, and acquires the individual attribute information of each of the plurality of passers-by passing in front of the display screen only from the image areas identified by the moving body image area identification means.
  7. The content display control device according to claim 6, wherein the personal attribute information acquisition means includes background image acquisition means for acquiring, as a background image, an image acquired by the passerby image acquisition means in which no passerby passing in front of the display screen appears, and updating means for updating the personal attribute information acquisition means based on the background image acquired by the background image acquisition means.
  8. The content display control device according to claim 6 or 7, further comprising content display control means for controlling display, on the display screen, of the content selected by the content selection means,
    wherein the content display control means restricts display of content on the display screen when the passerby image acquisition means has not acquired an image of a passerby passing in front of the display screen for a predetermined time, and cancels the restriction when the passerby image acquisition means acquires an image of a passerby passing in front of the display screen.
  9. The content display control device according to claim 6, further comprising:
    ideal characteristic value storage means for storing in advance a range of ideal characteristic values for the images acquired by the passerby image acquisition means; and
    passerby image acquisition control means for controlling the passerby image acquisition means so that the characteristic values of the images acquired by the passerby image acquisition means fall within the range of ideal characteristic values stored in the ideal characteristic value storage means.
  10. A content display control device for controlling, when a passerby passes in front of the display screen of a display device arranged at a position visible to an unspecified number of passers-by, display of content suitable for the passerby on the display screen, the device comprising:
    passerby image acquisition means for acquiring images of passers-by;
    non-moving object image portion extraction means for extracting an image portion of a non-moving object that is not included in the difference information of images of consecutive frames acquired by the passerby image acquisition means;
    image difference acquisition means for acquiring a difference image between the non-moving object image portion extracted by the non-moving object image portion extraction means and a background image, acquired by the passerby image acquisition means, that contains no passerby image;
    non-moving object detection means for detecting a non-moving object included in the difference image acquired by the image difference acquisition means; and
    notification means for notifying that the non-moving object exists in the field of view of the passerby image acquisition means when the non-moving object is detected for a predetermined time.
  11. A content display control method performed by a content display control device that, when a passerby passes in front of the display screen of a display device arranged at a position visible to an unspecified number of passers-by, controls display of content suitable for the passerby on the display screen, the method comprising:
    a personal attribute information acquisition step of acquiring individual attribute information of each of a plurality of passers-by passing in front of the display screen;
    a macro attribute information determination step of determining macro attribute information of the plurality of passers-by from the individual attribute information of the plurality of passers-by acquired in the personal attribute information acquisition step; and
    a content selection step of searching, using the macro attribute information determined in the macro attribute information determination step, a content database storing a plurality of contents to be displayed on the display screen in association with macro attribute information, and selecting content to be displayed on the display screen.
  12. A content display control program for causing a computer device to execute, when a passerby passes in front of the display screen of a display device arranged at a position visible to an unspecified number of passers-by, processing for controlling display of content suitable for the passerby on the display screen, the program causing the computer device to execute:
    a personal attribute information acquisition procedure of acquiring individual attribute information of each of a plurality of passers-by passing in front of the display screen;
    a macro attribute information determination procedure of determining macro attribute information of the plurality of passers-by from the individual attribute information of the plurality of passers-by acquired in the personal attribute information acquisition procedure; and
    a content selection procedure of searching, using the macro attribute information determined in the macro attribute information determination procedure, a content database storing a plurality of contents to be displayed on the display screen in association with macro attribute information, and selecting content to be displayed on the display screen.
JP2007318778A 2007-12-10 2007-12-10 Contents display control device, contents display control method, and contents display control program Pending JP2009139857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007318778A JP2009139857A (en) 2007-12-10 2007-12-10 Contents display control device, contents display control method, and contents display control program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2007318778A JP2009139857A (en) 2007-12-10 2007-12-10 Contents display control device, contents display control method, and contents display control program

Publications (1)

Publication Number Publication Date
JP2009139857A true JP2009139857A (en) 2009-06-25

Family

ID=40870483

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007318778A Pending JP2009139857A (en) 2007-12-10 2007-12-10 Contents display control device, contents display control method, and contents display control program

Country Status (1)

Country Link
JP (1) JP2009139857A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01245395A (en) * 1988-03-28 1989-09-29 Toshiba Corp Image monitoring system
JP2002082642A (en) * 2000-09-08 2002-03-22 Japan Servo Co Ltd Advertising method and advertising device
JP2003111007A (en) * 2001-09-27 2003-04-11 Fuji Photo Film Co Ltd Image processing system, imaging apparatus, image processor, image processing method and program
JP2003255922A (en) * 2002-02-27 2003-09-10 Toshiba Corp Display device, and device and method for terminal processing
JP2004054376A (en) * 2002-07-17 2004-02-19 Hitoshi Hongo Method and device for estimating group attribute
JP2004227158A (en) * 2003-01-21 2004-08-12 Omron Corp Information providing device and information providing method
JP2004347952A (en) * 2003-05-23 2004-12-09 Clarion Co Ltd Advertisement distribution system
JP2005004656A (en) * 2003-06-13 2005-01-06 Canon Inc Image processor and its method
JP2007181070A (en) * 2005-12-28 2007-07-12 Shunkosha:Kk Apparatus and method for evaluating attention paid to contents
JP2007287093A (en) * 2006-04-20 2007-11-01 Fujitsu Frontech Ltd Program, method and device for detecting mobile object


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011008571A (en) * 2009-06-26 2011-01-13 Shunkosha:Kk Passer-by fluidity data generating device, content distribution controlling device, passer-by fluidity data generating method, and content distribution controlling method
JP2011232876A (en) * 2010-04-26 2011-11-17 Nippon Telegr & Teleph Corp <Ntt> Content attention degree calculation device, content attention degree calculation method, and content attention degree calculation program
JP2011248548A (en) * 2010-05-25 2011-12-08 Fujitsu Ltd Content determination program and content determination device
US8724845B2 (en) 2010-05-25 2014-05-13 Fujitsu Limited Content determination program and content determination device
JP2012078722A (en) * 2010-10-05 2012-04-19 Mitsubishi Electric Corp Information providing system
WO2012132285A1 (en) * 2011-03-25 2012-10-04 日本電気株式会社 Mobile body characteristic estimation system, mobile body characteristic estimation method, and program
JP5907162B2 (en) * 2011-03-25 2016-04-20 日本電気株式会社 Mobile object attribute estimation system, mobile object attribute estimation method, and program
JP2013109051A (en) * 2011-11-18 2013-06-06 Glory Ltd Electronic information providing system and electronic information providing method
JP2013140196A (en) * 2011-12-28 2013-07-18 Toshiba Tec Corp Information display device and program
JP2013152277A (en) * 2012-01-24 2013-08-08 Toshiba Tec Corp Apparatus, program, and system for providing information
JP2015528157A (en) * 2012-06-29 2015-09-24 インテル コーポレイション Method and apparatus for selecting advertisements for display on a digital sign


Legal Events

Date Code Title Description
20101209 A621 Written request for application examination Free format text: JAPANESE INTERMEDIATE CODE: A621
20131029 A02 Decision of refusal Free format text: JAPANESE INTERMEDIATE CODE: A02