CN110942055A - State identification method, device and equipment for display area and storage medium - Google Patents


Info

Publication number
CN110942055A
CN110942055A
Authority
CN
China
Prior art keywords
user
display area
display
crowd
passenger flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911416336.5A
Other languages
Chinese (zh)
Inventor
孙贺然
李佳宁
程玉文
任小兵
贾存迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201911416336.5A priority Critical patent/CN110942055A/en
Publication of CN110942055A publication Critical patent/CN110942055A/en
Priority to PCT/CN2020/105284 priority patent/WO2021135196A1/en
Priority to JP2021528437A priority patent/JP2022519149A/en
Priority to KR1020217015862A priority patent/KR20210088600A/en
Priority to TW109129710A priority patent/TW202127348A/en
Withdrawn legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0201 Market modelling; Market analysis; Collecting market data

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The disclosure provides a method, an apparatus, a device, and a storage medium for identifying the state of a display area. A video picture captured by a camera device deployed in the display area is acquired; passenger flow data of the display area within a first time period is determined based on the captured video picture, where the passenger flow data comprises at least two of: the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of passenger-flow people, and the number of passenger-flow person-times; crowd state data of the display area within the first time period is determined using the passenger flow data; and the display state of the display area in the display picture is controlled using the crowd state data.

Description

State identification method, device and equipment for display area and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for identifying a state of a display area.
Background
In exhibition sites such as shopping stores (e.g., supermarkets) and exhibition halls, various items are usually displayed in each display area for users to purchase or visit. The level of user interest in the items in each display area is a key concern for many stores and exhibitions. However, because crowds in stores and exhibitions are dense and users' movement tracks are irregular, analyzing the degree of user interest in the items in each display area is difficult.
Disclosure of Invention
In view of the above, the present disclosure provides at least one status identification scheme for a display area.
In a first aspect, the present disclosure provides a method for identifying a state of a display area, including:
acquiring a video picture acquired by camera equipment deployed in a display area;
determining passenger flow data of the display area in a first time period based on the collected video picture, wherein the passenger flow data comprises at least two of: the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of passenger-flow people, and the number of passenger-flow person-times;
determining crowd state data of the display area in the first time period by using the passenger flow data;
and controlling the display state of the display area in the display picture by utilizing the crowd state data.
In the method, passenger flow data can be determined based on the collected video picture, crowd state data of the display area in the first time period can be determined from the passenger flow data, and the display state of the display area in the display picture can be controlled using the crowd state data. The display state of the display area can be used to present the crowd state data of the display area, so that the crowd state of the display area can be monitored dynamically and the crowd paying attention to the display area can be analyzed.
In one possible embodiment, the crowd status data includes a crowd heat rating;
the utilizing the crowd state data to control the display state of the display area in the display picture comprises the following steps:
and controlling the layer corresponding to the display area in the display picture to present a color grade matched with the crowd heat grade.
Displaying different crowd heat grades with different color grades enriches the presentation of the display state of the display area and makes the crowd heat information of different display areas more intuitive.
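As a minimal sketch of this grade-to-color mapping (the grade names and hex colors are illustrative assumptions, not values from the patent):

```python
# Hypothetical mapping from crowd heat grade to the color grade rendered
# on the layer of the display area. Grade names and colors are assumed.
HEAT_GRADE_COLORS = {
    "low": "#4caf50",     # green layer for low crowd heat
    "medium": "#ff9800",  # orange layer for medium crowd heat
    "high": "#f44336",    # red layer for high crowd heat
}

def layer_color_for_grade(heat_grade: str) -> str:
    """Return the color to present on the display-area layer."""
    return HEAT_GRADE_COLORS[heat_grade]

print(layer_color_for_grade("high"))  # #f44336
```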
In one possible embodiment, the crowd heat grade is determined according to the following:
and determining a crowd heat value based on the weight coefficients respectively corresponding to at least two kinds of passenger flow data and the values of those at least two kinds of passenger flow data, and determining the crowd heat grade corresponding to the range in which the crowd heat value falls.
Because the crowd heat value determined in this way combines the characteristics of at least two kinds of passenger flow data, it can reflect the passenger flow characteristics of the display area, so the determined crowd heat grade is more accurate.
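The weighted computation described above can be sketched as follows; the field names, weight values, and grade thresholds are illustrative assumptions, since the patent does not fix them:

```python
# Sketch of the crowd-heat computation: a weighted sum of at least two kinds
# of passenger-flow data, then a grade chosen by the range the value falls in.
# Weights and thresholds below are assumed for illustration.

def crowd_heat_value(flow_data: dict, weights: dict) -> float:
    # Weighted combination of the selected passenger-flow statistics.
    return sum(weights[k] * flow_data[k] for k in weights)

def crowd_heat_grade(value: float) -> str:
    # Grade determined by the range in which the heat value falls.
    if value < 50:
        return "low"
    elif value < 200:
        return "medium"
    return "high"

flow = {"people_count": 120, "total_stay_minutes": 300}
w = {"people_count": 0.6, "total_stay_minutes": 0.4}
value = crowd_heat_value(flow, w)   # 0.6*120 + 0.4*300 = 192.0
print(crowd_heat_grade(value))      # medium
```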
In one possible embodiment, in a case that the passenger flow data includes the number of passenger-flow people and the number of passenger-flow person-times, the determining, by using the passenger flow data, the crowd state data of the display area in the first time period includes:
acquiring the number of passenger-flow people and the number of passenger-flow person-times at a plurality of time nodes within the first time period;
the utilizing the crowd state data to control the display state of the display area in the display picture comprises the following steps:
and displaying, in the display picture, a comparison result between the variation trend of the number of passenger-flow people and the variation trend of the number of passenger-flow person-times in the display area within the first time period, based on the number of passenger-flow people and the number of passenger-flow person-times at the plurality of time nodes of the first time period.
In the above embodiment, the comparison result between the variation trend of the number of passenger-flow people and the variation trend of the number of passenger-flow person-times in the display area can be presented on the display picture, which shows the change of both statistics more intuitively.
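A minimal sketch of preparing the two trends compared above; the appearance-event format `(user_id, time_node)` is an assumed input representation, not something defined by the patent:

```python
# Per time node, count distinct passenger-flow people (unique users) and
# passenger-flow person-times (total appearances). Input is a list of
# (user_id, time_node) tuples, one per recorded appearance (assumed format).
from collections import defaultdict

def trends(appearances):
    people = defaultdict(set)        # node -> distinct user ids
    person_times = defaultdict(int)  # node -> appearance count
    for user_id, node in appearances:
        people[node].add(user_id)
        person_times[node] += 1
    nodes = sorted(person_times)
    return ([len(people[n]) for n in nodes],
            [person_times[n] for n in nodes])

events = [("u1", 0), ("u2", 0), ("u1", 1), ("u1", 1), ("u3", 1)]
print(trends(events))  # ([2, 2], [2, 3])
```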
In one possible embodiment, the determining crowd status data of the presentation area during the first time period using the passenger flow data includes:
determining crowd distribution data under different attributes based on attribute information of each user, wherein the crowd distribution data comprises distribution data under at least one attribute of age, gender, charm value and emotion;
the utilizing the crowd state data to control the display state of the display area in the display picture comprises the following steps:
and controlling the display picture to respectively display the crowd distribution diagram under each attribute by using the crowd distribution data under different attributes.
In the above embodiment, the crowd distribution map under each attribute may be displayed according to the crowd distribution data under different attributes, so as to display a more detailed crowd distribution situation for each display area.
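The per-attribute distribution above can be sketched with a simple counter; the attribute names and values are illustrative assumptions:

```python
# Count users per value of one attribute (e.g. gender, age band, emotion)
# to produce the data behind each crowd distribution diagram.
from collections import Counter

def crowd_distribution(users, attribute):
    """Return {attribute value: number of users} for one attribute."""
    return Counter(u[attribute] for u in users)

users = [
    {"gender": "female", "age_band": "18-30", "emotion": "happy"},
    {"gender": "male",   "age_band": "18-30", "emotion": "calm"},
    {"gender": "female", "age_band": "31-45", "emotion": "happy"},
]
print(crowd_distribution(users, "gender"))   # Counter({'female': 2, 'male': 1})
print(crowd_distribution(users, "age_band"))
```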
In a possible implementation manner, in a case that the passenger flow data includes a length of attention of each user and attribute information of each user, the determining, by using the passenger flow data, crowd state data of the presentation area in the first time period includes:
based on the attribute information of each user appearing in the display area, dividing each user into crowd sets with different attributes;
acquiring the attention duration of each user in the display area in the crowd set with each type of attributes;
determining the attention total duration of the crowd set with each type of attribute based on the attention duration corresponding to each user in the crowd set with each type of attribute;
the utilizing the crowd state data to control the display state of the display area in the display picture comprises the following steps:
and controlling the attention state effect graph of the crowd set with each type of attribute to be displayed in the display picture based on the attention total time length of the crowd set with each type of attribute.
Based on the above embodiment, the attention state effect graph of the crowd set of each type of attribute presented in the display screen can be used for indicating the attention degree of the crowd of different types of attributes to the display area.
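The steps above can be sketched as follows; the attribute combination used as the crowd-set key is an illustrative assumption:

```python
# Divide users into crowd sets by an attribute combination (assumed here to
# be gender plus age band), then sum each set's attention durations to get
# the total attention duration per crowd set.
from collections import defaultdict

def attention_totals(users):
    totals = defaultdict(float)  # (gender, age_band) -> total seconds
    for u in users:
        key = (u["gender"], u["age_band"])
        totals[key] += u["attention_seconds"]
    return dict(totals)

users = [
    {"gender": "female", "age_band": "18-30", "attention_seconds": 40.0},
    {"gender": "female", "age_band": "18-30", "attention_seconds": 25.0},
    {"gender": "male",   "age_band": "31-45", "attention_seconds": 10.0},
]
print(attention_totals(users))
# {('female', '18-30'): 65.0, ('male', '31-45'): 10.0}
```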
In one possible embodiment, the users in the crowd set for each category of attributes share at least two of the following attributes: age, gender, charm value, and emotion.
In one possible embodiment, the attention state effect graph is displayed by a specific graphic; crowd sets with different total attention durations correspond to different display effects of the specific graphic.
In one possible embodiment, the length of attention is determined in accordance with the following:
for any user, identifying face orientation data of the user in the collected video picture;
recording the time when the user pays attention to the display object in the display area for the first time under the condition that the face orientation data indicates that the user pays attention to the display object;
and determining the attention duration of the user paying attention to the display object in the first time period based on the recorded time of the user paying attention to the display object for the first time, wherein the attention duration does not exceed the first time period.
In one possible embodiment, the face orientation data comprises a pitch angle and a yaw angle of the face;
the detecting that the face orientation data indicates that the user is interested in a presentation object of the presentation area comprises:
determining that the face orientation data indicates that the user is interested in a display object of the display area if the pitch angle is within a first angular range and the yaw angle is within a second angular range.
In the above embodiment, the attention duration of the user within the first time period is determined from the recorded time at which the user first paid attention to the display object. This avoids repeatedly counting attention durations when the intervals between successive moments of attention are very short, making the calculation of the attention duration more reasonable.
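A minimal sketch of the attention judgement and duration capping described above; the two angle ranges are illustrative assumptions, since the patent does not fix their values:

```python
# A user is judged to be paying attention to the display object when the
# face pitch angle is within a first range and the yaw angle within a
# second range. The ranges below are assumed for illustration.
PITCH_RANGE = (-20.0, 20.0)  # first angle range, degrees (assumed)
YAW_RANGE = (-30.0, 30.0)    # second angle range, degrees (assumed)

def is_attending(pitch: float, yaw: float) -> bool:
    return (PITCH_RANGE[0] <= pitch <= PITCH_RANGE[1]
            and YAW_RANGE[0] <= yaw <= YAW_RANGE[1])

def attention_duration(first_attended_at: float, now: float,
                       period_length: float) -> float:
    # Duration since the first recorded attention time, capped so that it
    # does not exceed the first time period.
    return min(now - first_attended_at, period_length)

print(is_attending(5.0, -10.0))                # True
print(attention_duration(100.0, 160.0, 45.0))  # 45.0 (capped)
```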
In one possible embodiment, the number of passenger-flow person-times in the presentation area during the first time period is determined according to the following:
determining the total occurrence number of each user in the display area in the first time period;
recording the starting time of each time the user enters the display area for any user; if the recorded time length of the interval between the two times of starting time exceeds a first time length, determining that the user appears once; the first duration does not exceed a total duration of the first time period.
In the above embodiment, a user is counted as appearing once only when the interval between the two recorded start times exceeds the first duration; this avoids repeatedly counting appearances of the same user within a short time and improves the accuracy of the passenger-flow statistics.
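The counting rule above can be sketched as follows, assuming entry start times are given in seconds:

```python
# Record the start time of each entry into the display area; count a new
# appearance only when the interval since the last counted start time
# exceeds the first duration, so rapid re-entries are not double-counted.

def count_appearances(entry_times, first_duration):
    if not entry_times:
        return 0
    count = 1                      # the first entry is one appearance
    last_counted = entry_times[0]
    for t in entry_times[1:]:
        if t - last_counted > first_duration:
            count += 1             # far enough apart: a new appearance
            last_counted = t
    return count

# Entries at t = 0, 5, 70, 72 s with a 60 s first duration -> 2 appearances:
print(count_appearances([0, 5, 70, 72], 60))  # 2
```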
In one possible embodiment, the length of the user's stay in the presentation area during the first period of time is determined according to the following:
determining the total length of stay of each user in each occurrence in the first time period;
and for any user, in a case that the interval between the start times of two successive entries of the user into the display area is determined to exceed the first duration, taking that interval as the stay duration of the user for one appearance.
In the above embodiment, when the recorded interval between the start times of two entries into the display area exceeds the first duration, that interval can be used as the stay duration of the user for one appearance.
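The stay-duration rule above can be sketched as follows, again assuming entry start times in seconds:

```python
# When the interval between the start times of two successive entries
# exceeds the first duration, that interval is taken as the stay duration
# of one appearance; shorter intervals are treated as the same appearance.

def stay_durations(entry_times, first_duration):
    stays = []
    for a, b in zip(entry_times, entry_times[1:]):
        if b - a > first_duration:
            stays.append(b - a)   # counted as one appearance's stay
    return stays

print(stay_durations([0, 70, 75, 200], 60))  # [70, 125]
```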
In one possible embodiment, before controlling the display state of the display area in the display screen by using the crowd status data, the method further includes:
and acquiring the trigger operation aiming at the display area in the display picture.
In a possible implementation manner, the acquiring of the trigger operation for the display area in the display screen includes:
controlling the display interface to display a gesture prompt box, wherein the gesture prompt box comprises gesture description content;
detecting a user gesture in a video picture acquired by the camera equipment;
and under the condition that the user gesture is consistent with the gesture recorded by the gesture description content, confirming to acquire the trigger operation aiming at the display area in the display picture.
In the above embodiment, acquiring the trigger operation through a user gesture in the video picture enriches the ways of triggering operations for the display area and increases the interaction between the user and the device.
In a second aspect, an embodiment of the present disclosure provides a device for identifying a state of a display area, including:
the acquisition module is used for acquiring video pictures acquired by the camera equipment deployed in the display area;
the first determining module is used for determining passenger flow data of the display area in a first time period based on the collected video pictures, wherein the passenger flow data comprises at least two of: the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of passenger-flow people, and the number of passenger-flow person-times;
the second determining module is used for determining the crowd state data of the display area in the first time period by using the passenger flow data;
and the control module is used for controlling the display state of the display area in the display picture by utilizing the crowd state data.
In one possible embodiment, the crowd status data includes a crowd heat rating;
the control module is used for controlling the display state of the display area in the display picture by utilizing the crowd state data:
and controlling the layer of the display area corresponding to the display area in the display picture to present a color grade matched with the crowd heat grade.
In one possible embodiment, the second determining module is further configured to determine the crowd heat rating according to the following:
and determining a crowd heat value based on the weight coefficients respectively corresponding to at least two kinds of passenger flow data in the passenger flow data and the numerical values of the at least two kinds of passenger flow data, and determining the crowd heat level corresponding to the range of the crowd heat value.
In a possible implementation manner, in a case that the passenger flow data includes the number of passenger-flow people and the number of passenger-flow person-times, the second determining module, when determining the crowd state data of the display area in the first time period by using the passenger flow data, is configured to:
acquire the number of passenger-flow people and the number of passenger-flow person-times at a plurality of time nodes within the first time period;
the control module is used for controlling the display state of the display area in the display picture by utilizing the crowd state data:
and displaying a comparison result between the variation trend of the number of the passenger flow persons in the display area and the variation trend of the number of the passenger flow persons in the display area in the first time period in the display picture based on the number of the passenger flow persons and the number of the passenger flow persons in a plurality of time nodes of the first time period.
In one possible embodiment, the second determining module, when determining the crowd status data of the display area in the first time period by using the passenger flow data, is configured to:
determining crowd distribution data under different attributes based on attribute information of each user, wherein the crowd distribution data comprises distribution data under at least one attribute of age, gender, charm value and emotion;
the control module is used for controlling the display state of the display area in the display picture by utilizing the crowd state data:
and controlling the display picture to respectively display the crowd distribution diagram under each attribute by using the crowd distribution data under different attributes.
In a possible implementation manner, in a case that the passenger flow data includes a duration of interest of each user and attribute information of each user, the second determining module, when determining the crowd status data of the display area in the first time period by using the passenger flow data, is configured to:
based on the attribute information of each user appearing in the display area, dividing each user into crowd sets with different attributes;
acquiring the attention duration of each user in the display area in the crowd set with each type of attributes;
determining the attention total duration of the crowd set with each type of attribute based on the attention duration corresponding to each user in the crowd set with each type of attribute;
the control module is used for controlling the display state of the display area in the display picture by utilizing the crowd state data:
and controlling the attention state effect graph of the crowd set with each type of attribute to be displayed in the display picture based on the attention total time length of the crowd set with each type of attribute.
In one possible embodiment, the users in the crowd set for each category of attributes share at least two of the following attributes: age, gender, charm value, and emotion.
In one possible embodiment, the attention state effect graph is displayed by a specific graphic; crowd sets with different total attention durations correspond to different display effects of the specific graphic.
In a possible implementation, the first determining module is further configured to determine the attention duration according to the following manner:
for any user, identifying face orientation data of the user in the collected video picture;
recording the time when the user pays attention to the display object in the display area for the first time under the condition that the face orientation data indicates that the user pays attention to the display object;
and determining the attention duration of the user paying attention to the display object in the first time period based on the recorded time of the user paying attention to the display object for the first time, wherein the attention duration does not exceed the first time period.
In one possible embodiment, the face orientation data includes a pitch angle and a yaw angle of the face;
the first determining module, when detecting that the face orientation data indicates that the user is interested in a presentation object of the presentation area, is configured to:
determining that the face orientation data indicates that the user is interested in a display object of the display area if the pitch angle is within a first angular range and the yaw angle is within a second angular range.
In a possible embodiment, the first determining module is further configured to determine the number of passenger-flow person-times in the display area within the first time period according to the following manner:
determining the total occurrence number of each user in the display area in the first time period;
recording the starting time of each time the user enters the display area for any user; if the recorded time length of the interval between the two times of starting time exceeds a first time length, determining that the user appears once; the first duration does not exceed a total duration of the first time period.
In a possible embodiment, the first determining module is further configured to determine a user stay time of the presentation area in the first time period according to the following manner:
determining the total length of stay of each user in each occurrence in the first time period;
and for any user, in a case that the interval between the start times of two successive entries of the user into the display area is determined to exceed the first duration, take that interval as the stay duration of the user for one appearance.
In a possible implementation manner, the obtaining module is further configured to obtain a trigger operation for the display area in the display screen before controlling the display state of the display area in the display screen by using the crowd state data.
In a possible implementation manner, the obtaining module, when obtaining the trigger operation for the display area in the display screen, is configured to:
controlling the display interface to display a gesture prompt box, wherein the gesture prompt box comprises gesture description content;
detecting a user gesture in a video picture acquired by the camera equipment;
and under the condition that the user gesture is consistent with the gesture recorded by the gesture description content, confirming to acquire the trigger operation aiming at the display area in the display picture.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions being executable by the processor to perform the steps of the method for identifying a state of a display area according to the first aspect or any one of the embodiments.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for identifying a state of a presentation area according to the first aspect or any one of the embodiments.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for use in the embodiments will be briefly described below. The drawings herein are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure, and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without creative effort.
Fig. 1 is a schematic flowchart illustrating a method for identifying a status of a display area according to an embodiment of the present disclosure;
FIG. 2 is a schematic illustration showing display areas with different color levels in a display screen according to an embodiment of the disclosure;
FIG. 3 is a diagram illustrating a population profile for each attribute in a display provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a comparison result between the variation trend of the number of passenger-flow people and the variation trend of the number of passenger-flow person-times provided by an embodiment of the disclosure;
fig. 5 is a flowchart illustrating a method for determining a duration of attention of a user according to an embodiment of the present disclosure;
FIG. 6 illustrates a state of interest effect graph provided by embodiments of the present disclosure;
FIG. 7 is a schematic diagram illustrating a gesture prompt box provided by an embodiment of the present disclosure;
FIG. 8a illustrates another display interface schematic provided by embodiments of the present disclosure;
FIG. 8b illustrates another display interface schematic provided by embodiments of the present disclosure;
fig. 9 is a schematic diagram illustrating an architecture of a state identification apparatus for a display area according to an embodiment of the present disclosure;
fig. 10 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
In the related art, monitoring of each display area generally relies on monitoring the crowd density in the area. However, the counted crowd density alone does not support specific analysis of the passenger-flow crowd in the display area, for example analysis of the attention duration of the crowd or the attribute information of the users in it, and therefore cannot support analysis of the users' degree of interest in the display area.
Based on this, in the state identification method for a display area provided by the disclosure, the passenger flow data of the display area in a first time period can be determined from the collected video picture. Because the determined passenger flow data includes at least two of the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of passenger-flow people, and the number of passenger-flow person-times, the crowd state data determined from it combines at least two features of the passenger flow. Therefore, when the display state of the display area in the display picture is controlled based on the crowd state data, the crowd characteristics of the display area can be presented more accurately and intuitively.
The state identification method of the display area provided by the embodiment of the disclosure can be applied to a server or a terminal device supporting a display function. The server may be a local server or a cloud server, and the terminal device may be a smart phone, a tablet computer, a Personal Digital Assistant (PDA), a smart television, and the like, which is not limited in the present application.
When the embodiment of the disclosure is applied to a server, the data in the disclosure may be acquired from a terminal device and/or a camera device, and the display state of the display area in the display screen may be controlled, or the control of the display state of the display area in the display screen of the terminal device may be realized by issuing a control instruction to the terminal device.
For the convenience of understanding the embodiments of the present disclosure, a method for identifying a state of a display area disclosed in the embodiments of the present disclosure will be described in detail first.
Referring to fig. 1, a schematic flow chart of a method for identifying a status of a display area according to an embodiment of the present disclosure includes:
S101, acquiring a video picture captured by a camera device deployed in the display area.
S102, determining passenger flow data of the display area in a first time period based on the collected video picture, wherein the passenger flow data comprises at least two of the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of times of passenger flow, and the number of people in the passenger flow.
S103, determining crowd state data of the display area in the first time period by using the passenger flow data.
And S104, controlling the display state of the display area in the display picture by utilizing the crowd state data.
The method provided by the embodiment can determine passenger flow data based on the collected video picture, determine crowd state data of the display area in the first time period by using the passenger flow data, and control the display state of the display area in the display picture by using the crowd state data. The display state of the display area can be used for prompting crowd state data in the display area, so that dynamic monitoring on the crowd state of the display area can be realized, and concerned crowds in the display area can be analyzed.
When the display picture comprises a plurality of display areas, different display areas in the display picture can be controlled to present different display states according to the crowd state data of different display areas, and then the crowd states of different display areas can be contrasted and analyzed.
In one embodiment of the present disclosure, the crowd state data includes a crowd heat level. When determining the crowd heat level, a crowd heat value can first be determined based on the weight coefficients respectively corresponding to at least two kinds of passenger flow data and the numerical values of the at least two kinds of passenger flow data, and then the crowd heat level corresponding to the range in which the crowd heat value falls is determined.
Specifically, the crowd heat value range corresponding to each crowd heat level can be preset. For example, the crowd heat value range corresponding to the first-level crowd heat is 0-10, the range corresponding to the second-level crowd heat is 11-20, and the range corresponding to the third-level crowd heat is 21-30. After the crowd heat value is determined from the numerical values of the passenger flow data and the weight coefficient of each kind of passenger flow data, the range to which the crowd heat value belongs can be determined, and the corresponding crowd heat level is thereby obtained. For example, if the determined crowd heat value is 25, the corresponding crowd heat level can be determined to be the third-level crowd heat.
For example, if the crowd heat value is calculated based on the number of people in the passenger flow and the number of times of passenger flow, the calculation formula may be as follows:

p = a·x1 + b·x2

wherein p represents the crowd heat value, x1 represents the value of the number of people in the passenger flow, x2 represents the value of the number of times of passenger flow, a is the weight of the number of people in the passenger flow, and b is the weight of the number of times of passenger flow.
For example, the crowd heat value may be calculated based on the combination of the number of people in the passenger flow and the staying time, or the crowd heat value may be calculated based on the combination of the number of people in the passenger flow and the attention time, and the like, which is not limited in the present application.
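As an illustrative sketch only (the weight coefficients, the concrete passenger flow values, and the function name below are hypothetical examples, not values fixed by the disclosure), the weighted-sum heat value and its mapping to a preset level range could be computed as follows:

```python
def crowd_heat_level(values, weights, level_ranges):
    """Compute the crowd heat value p = sum(w * x) and map it to the
    heat level whose preset range contains p.

    level_ranges maps a level label to an inclusive (low, high) range.
    """
    p = sum(w * x for w, x in zip(weights, values))
    for level, (low, high) in level_ranges.items():
        if low <= p <= high:
            return p, level
    return p, None  # p falls outside every preset range

# The ranges given in the text: 0-10, 11-20 and 21-30.
ranges = {"first-level": (0, 10), "second-level": (11, 20), "third-level": (21, 30)}
# Hypothetical weights a=1, b=1, people count 20, times count 5 -> p = 25,
# which falls in the third-level range, matching the example in the text.
print(crowd_heat_level([20, 5], [1, 1], ranges))  # (25, 'third-level')
```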
When the crowd state data is used to control the display state of the display area in the display picture, the layer corresponding to the display area in the display picture can be controlled to present a color grade matched with the crowd heat level.
The color grades matched with different crowd heat levels can be different. For example, the color matched with the first-level crowd heat can be green, the color matched with the second-level crowd heat can be yellow, and the color matched with the third-level crowd heat can be red; alternatively, different color grades may present different gray values. The color grade matched with each crowd heat level can be preset.
After the crowd heat level is determined, the color grade matched with the crowd heat level can be determined, and the layer corresponding to the display area in the display picture is then controlled to present the color of that color grade. For example, display areas with different color grades may be displayed in the display picture as shown in fig. 2, where different gray areas in fig. 2 represent different display areas, the gray value represents the crowd heat of the display area, and a higher gray value indicates a higher crowd heat.
When the display picture comprises a plurality of display areas, the crowd heat information of different display areas can be displayed through the colors of the layers corresponding to the different display areas, which is a more intuitive display mode.
In another embodiment of the present disclosure, the display area in the display screen may be further controlled to display a display state related to the passenger flow data, for example, the display state related to different types of passenger flow data may be displayed in the display screen controlled according to different types of passenger flow data.
In one possible embodiment, determining the crowd state data of the display area in the first time period by using the passenger flow data comprises: determining crowd distribution data under different attributes based on the attribute information of each user, wherein the crowd distribution data comprises distribution data under at least one of the attributes of age, gender, charm value and emotion. For example, the proportions in the crowd distribution data may be calculated based on the number of people in the passenger flow; in this case, the passenger flow data includes at least the attribute information of each user appearing in the display area and the number of people in the passenger flow.
Correspondingly, the controlling the display state of the display area in the display picture by using the crowd state data comprises the following steps: and controlling the display picture to respectively display the crowd distribution diagram under each attribute by utilizing the crowd distribution data under different attributes.
The crowd distribution map under each attribute shown in the display picture may be a pie chart, a ring chart, a bar chart, or the like. For example, the crowd distribution map under each attribute may be as shown in fig. 3, where the attributes shown are gender, age, charm value and emotion, and the proportion of each attribute value under each attribute is displayed. The legend beside each attribute in fig. 3 indicates the proportions of the different values of that attribute. For example, "XXXX" beside the passenger flow age distribution may indicate different age intervals, and "XXX" may indicate the proportion of people in that age interval to the total number of people (or the number of people in that age interval); "XXX" beside the passenger flow gender distribution may represent the proportion of users of each gender (or the number of people of that gender); "XXXXXX" beside the charm value distribution may represent different charm value intervals, and "XXX" the proportion of people in that charm value interval to the total number of people (or the number of people in that interval); "XXXXXX" beside the emotion distribution may represent different types of emotion, and "XXX" the proportion of people with that type of emotion to the total number of people (or the number of people with that emotion).
In another embodiment of the present disclosure, the crowd state data includes the number of people in the passenger flow and the number of times of passenger flow at a plurality of time nodes in the first time period, which can be obtained from the number of people in the passenger flow and the number of times of passenger flow in the passenger flow data.
In order to reduce the amount of calculation, when determining the number of people in the passenger flow and the number of times of passenger flow of the display area in the first time period, they may be determined once every preset time interval; for example, they may be determined once every minute.
Specifically, when the number of times of passenger flow is determined, the total number of appearances of each user in the display area in the first time period may be determined. For any user, the start time of each entry of the user into the display area can be recorded; if the duration of the interval between two recorded start times exceeds a first duration, the user is determined to have appeared once, wherein the first duration does not exceed the total duration of the first time period.
Illustratively, if the start times of the user entering display area A are 10:00, 10:08 and 10:11 respectively, and the first duration is 10 minutes, then the interval between 10:00 and 10:08 is less than the first duration, so no new appearance is counted; the interval between 10:00 and 10:11 is greater than the first duration, so the user is determined to have appeared once.
In the above manner, the user is determined to appear once when the time length of the interval between the two recorded start times exceeds the first time length.
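A minimal sketch of this debounce-style counting rule (the function name is assumed, and the treatment of the very first entry, which the text leaves open, is an assumption made explicit in the comments):

```python
from datetime import datetime, timedelta

def count_appearances(start_times, first_duration):
    """Count appearances: an entry is counted as a new appearance only
    when it starts more than `first_duration` after the last counted
    start time. Whether the initial entry itself counts as an appearance
    is left open by the source text and is not counted here."""
    count = 0
    anchor = None
    for t in sorted(start_times):
        if anchor is None:
            anchor = t  # first recorded entry becomes the anchor
            continue
        if t - anchor > first_duration:
            count += 1
            anchor = t  # restart the interval from this entry
    return count

# The example from the text: entries at 10:00, 10:08 and 10:11 with a
# 10-minute first duration yield one counted appearance.
times = [datetime(2020, 1, 1, 10, 0), datetime(2020, 1, 1, 10, 8),
         datetime(2020, 1, 1, 10, 11)]
print(count_appearances(times, timedelta(minutes=10)))  # 1
```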
When the number of people in the passenger flow is determined, the number of face images contained in the video pictures collected in the first time period can be obtained, and the obtained face images can then be subjected to deduplication processing: if the face of the same user appears in the video pictures at different times, the number of people in the passenger flow corresponding to that user is determined to be 1. The number of face images after deduplication is then determined as the number of people in the passenger flow.
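Assuming a face-recognition backend that assigns each detected face a persistent identifier (an assumption for illustration; the disclosure only states that duplicate face images of the same user are merged), the deduplicated people count reduces to counting distinct identifiers:

```python
def passenger_flow_people(face_ids):
    """Each user counts once no matter how many times their face
    appears in the collected video pictures."""
    return len(set(face_ids))

# User 'u1' appears three times and 'u2' once: 2 distinct people.
print(passenger_flow_people(["u1", "u2", "u1", "u1"]))  # 2
```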
Considering that the number of people in the passenger flow and the number of times of passenger flow change in real time, in order to reduce the amount of calculation, the number of people in the passenger flow and the number of times of passenger flow at a plurality of time nodes in the first time period can be obtained, where the time intervals between any two adjacent time nodes can be the same.
In one possible implementation manner, the number of people in the passenger flow and the number of times of passenger flow in the first time period can be displayed in the same coordinate system. For example, a line graph can be drawn with the time nodes as the abscissa and the number of people in the passenger flow as the ordinate, and another line graph can be drawn with the same time nodes as the abscissa and the number of times of passenger flow as the ordinate; the comparison result between the variation trend of the number of people in the passenger flow and the variation trend of the number of times of passenger flow is then displayed through the line graphs. In another example of the present disclosure, the comparison result between the two variation trends in the display area in the first time period may be displayed by drawing a bar chart, a pie chart, or the like.
For example, the comparison result between the variation trend of the number of people in the passenger flow and the variation trend of the number of times of the passenger flow displayed in the display screen may be as shown in fig. 4.
In the above embodiment, the comparison result between the variation trend of the number of people in the passenger flow and the variation trend of the number of times of passenger flow in the display area can be displayed through the display screen, and by this means, the variation situation of the number of people in the passenger flow and the number of times of passenger flow in the display area can be displayed more intuitively.
In another embodiment of the present disclosure, the crowd status data includes a total length of attention of the crowd set of each type of attribute, and the passenger flow data includes, for example, a length of attention of the user and attribute information of each user. Wherein the attribute information of the user includes at least two attributes of age, gender, charm value, and emotion.
When determining the attention duration of the user, the method shown in fig. 5 may include the following steps:
S501, for any user, recognizing face orientation data of the user in the collected video picture.
The face orientation data comprises a pitch angle and a yaw angle of the face. Whether the user pays attention to the display object in the display area can be determined by judging whether the pitch angle is in the first angle range and the yaw angle is in the second angle range.
S502, recording the time when the user pays attention to the display object for the first time under the condition that the face orientation data indicates that the user pays attention to the display object in the display area.
In the case where the pitch angle in the face orientation data is within a first angular range and the yaw angle in the face orientation data is within a second angular range, it may be determined that the face orientation data indicates that the user is interested in a presentation object of the presentation area.
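A sketch of this gate, assuming illustrative angle ranges (the disclosure does not fix concrete values for the first and second angle ranges; the ±20° pitch and ±30° yaw defaults below are placeholders, as is the function name):

```python
def is_attending(pitch_deg, yaw_deg,
                 pitch_range=(-20.0, 20.0),  # hypothetical first angle range
                 yaw_range=(-30.0, 30.0)):   # hypothetical second angle range
    """Face orientation indicates attention to the display object only
    when the pitch angle lies in the first range and the yaw angle lies
    in the second range."""
    return (pitch_range[0] <= pitch_deg <= pitch_range[1]
            and yaw_range[0] <= yaw_deg <= yaw_range[1])

print(is_attending(5.0, -10.0))  # True: roughly facing the display object
print(is_attending(45.0, 0.0))   # False: pitch outside the first range
```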
S503, determining the attention duration for which the user pays attention to the display object in the first time period based on the multiple recorded times at which the user started paying attention to the display object, wherein the attention duration does not exceed the first time period.
Specifically, when determining the attention duration for which the user pays attention to the display object in the first time period, the start times at which the user pays attention to the display object in the first time period may be determined first. Then, when the duration of the interval between two such start times exceeds a set duration, the duration of that interval is taken as the duration of one instance of the user's attention to the display object.
For example, if the first time period is 10:00 to 10:30, the start times at which the user pays attention to the display object are 10:02, 10:05, 10:13, 10:15 and 10:25 respectively, and the set duration is 10 minutes, then the first attention duration is 10:13 - 10:02 = 11 minutes, and the second attention duration is 10:25 - 10:13 = 12 minutes.
In the above manner, the attention duration for which the user pays attention to the display object in the first time period is determined only from the recorded times at which the user started paying attention to the display object. This avoids counting a new attention duration whenever the interval between two attention start times is too short, which reduces the amount of calculation for the attention duration and makes the calculated attention duration more reasonable.
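The interval rule above can be sketched as follows (function name assumed): each time the gap between the current start time and the last anchor exceeds the set duration, that gap is recorded as one attention duration and the anchor moves forward.

```python
from datetime import datetime, timedelta

def attention_durations(start_times, set_duration):
    """Turn the recorded attention start times into per-instance
    attention durations according to the interval rule."""
    times = sorted(start_times)
    durations = []
    anchor = times[0]
    for t in times[1:]:
        if t - anchor > set_duration:
            durations.append(t - anchor)  # one instance of attention
            anchor = t                    # restart from this start time
    return durations

# Example from the text: starts at 10:02, 10:05, 10:13, 10:15 and 10:25
# with a 10-minute set duration yield durations of 11 and 12 minutes.
starts = [datetime(2020, 1, 1, 10, m) for m in (2, 5, 13, 15, 25)]
print(attention_durations(starts, timedelta(minutes=10)))
```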
Specifically, when the attention duration of each user and the attribute information of each user are used to determine the total attention duration of the crowd set of each type of attribute in the first time period, each user can first be divided into crowd sets of different types of attributes based on the attribute information of each user appearing in the display area. Then, the attention duration of each user in each crowd set in the display area is obtained, and the total attention duration of the crowd set of each type of attribute is determined from the attention durations corresponding to the users in that crowd set.
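A minimal grouping sketch under an assumed per-user record schema (the field names "gender", "age_band" and "attention_minutes" are illustrative; the disclosure only requires grouping users by attribute information and summing each group's attention durations):

```python
from collections import defaultdict

def total_attention_by_group(users):
    """Divide users into crowd sets keyed by their attribute
    information, then sum each set's attention durations."""
    totals = defaultdict(float)
    for u in users:
        key = (u["gender"], u["age_band"])  # a crowd set of two attributes
        totals[key] += u["attention_minutes"]
    return dict(totals)

users = [
    {"gender": "male", "age_band": "20-29", "attention_minutes": 11},
    {"gender": "male", "age_band": "20-29", "attention_minutes": 12},
    {"gender": "female", "age_band": "30-39", "attention_minutes": 7},
]
print(total_attention_by_group(users))
# {('male', '20-29'): 23.0, ('female', '30-39'): 7.0}
```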
In one possible implementation manner, the attention state effect graph of the crowd set of each type of attribute can be displayed in the display screen by utilizing the attention total time length of the crowd set of each type of attribute in the first time period.
The attention state effect graph can be displayed through a specific image; crowd sets of different types of attributes with different total attention durations correspond to specific images with different display effects. For example, the attention state effect graph may be as shown in fig. 6. Taking gender and age as the attributes, fig. 6 illustrates the distribution of the total attention durations of users with different types of attributes within the same interval range: a subset of users whose gender is male and a subset of users whose gender is female can be determined from the crowd set, an age subset for each age group can then be determined within each gender subset, and the diameter of the corresponding circle is determined according to each age subset, where "XXXX" in the circles represents different age intervals. The divided subsets of different genders and age groups can be regarded as crowd sets of different types of attributes.
Further, the attention duration of each user in the crowd sets of different attributes can be counted according to the above calculation of the attention duration, and the attention state effect graph of the crowd set of each type of attribute can then be displayed based on the total attention duration of the users in each crowd set. For example, as shown in fig. 6, the longer the total attention duration of a crowd set, the larger the size of its attention state effect graph.
In the above embodiment, the attention state effect graph of each crowd set can be displayed according to the total attention durations of crowd sets with different attributes, which enables an analysis of how the total attention duration differs among crowds with different attributes.
In another embodiment of the present disclosure, the crowd state data includes the total stay duration of the crowd set of each type of attribute, and the passenger flow data includes the stay duration of each user appearing in the display area in the first time period and the attribute information of each user.
When determining the stay duration of each user appearing in the display area, the total number of appearances of each user in the first time period can be determined first. Then, for any user, in a case where the duration of the interval between the start times of two entries of the user into the display area exceeds the first duration, the duration of that interval is taken as the stay duration of one appearance of the user.
For example, if the first time period is 10:00 to 10:30, the start times of the user entering the display area are 10:02, 10:05, 10:13, 10:15 and 10:25 respectively, and the first duration is 10 minutes, then the first stay duration is 10:13 - 10:02 = 11 minutes, and the second stay duration is 10:25 - 10:13 = 12 minutes.
In a possible implementation manner, the total stay duration of the crowd set of each type of attribute can be determined by using the stay durations and attribute information of the users appearing in the display area in the first time period, and the stay state effect graph of the crowd set of each type of attribute can then be displayed in the display picture based on the total stay duration of each crowd set.
Specifically, each user can be divided into crowd sets with different attributes based on the attribute information of each user appearing in the display area; then, the stay time of each user in the display area in the crowd set with each type of attributes is obtained; and determining the total stay time of the crowd set with each type of attribute based on the stay time corresponding to each user in the crowd set with each type of attribute.
The stay state effect graph is displayed through a specific image; crowd sets of different types of attributes with different total stay durations correspond to specific images with different display effects. Illustratively, the stay state effect graph may also be in the form shown in fig. 3, except that here the graph shows the distribution of the stay durations of users with different attributes within the same interval range.
In the above embodiment, the stay state effect graph of each crowd set can be displayed according to the total stay durations of crowd sets with different attributes, which enables an analysis of how the total stay duration differs among crowds with different attributes.
In an example of the present disclosure, after the trigger operation for the display area in the display screen is acquired, the display state of the display area in the display screen may be controlled by using the crowd state data.
The trigger operation for the display area in the display picture may be acquired as follows: the display interface is controlled to display a gesture prompt box containing gesture description content; a user gesture is then detected in the video picture captured by the camera device; and when the detected user gesture is consistent with the gesture described by the gesture description content, it is confirmed that the trigger operation for the display area in the display picture has been acquired.
For example, as shown in fig. 7, the gesture prompt box may include gesture description information such as "please stand within the view frame and raise your hand to confirm", and the human posture shown in the outlined portion prompts the user to make the corresponding gesture at that position. When it is detected that the user raises a hand within the view frame, it is determined that the trigger operation for the display area in the display picture has been acquired.
In another example of the present disclosure, the trigger operation for the display area in the display picture may also be acquired by detecting that the position corresponding to the display area on the screen is touched, where the touch manner includes but is not limited to a single click, a double click, a long press and a double press; or the trigger operation may be acquired by receiving a voice instruction input by the user and parsing the voice instruction, thereby determining that the trigger operation for the display area in the display picture has been acquired.
In connection with the above embodiments, the following specific examples are given:
When the passenger flow data includes the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of times of passenger flow and the number of people in the passenger flow, the initially displayed display interface may be as shown in fig. 8a. In fig. 8a, areas with different heat levels in the left display region have different gray values, and the right display region shows the comparison result between the variation trend of the number of people in the passenger flow and the variation trend of the number of times of passenger flow in the first time period, the crowd distribution map under each attribute, and the passenger flow data statistics, where the statistics include the current number of people in the passenger flow, the total stay duration of the day, the historical number of people in the passenger flow, the average stay duration per person, and the like. When it is detected that any area in the left display region is triggered, the display interface may be displayed as shown in fig. 8b: the attention state effect graph displayed in the right display region of fig. 8a is replaced by an interface graph corresponding to the triggered display area, and the interface graph may display the passenger flow data statistics (including the number of people in the passenger flow, the stay duration and the like) of the triggered display area, the variation trends of the number of people and number of times of passenger flow, an analysis of the attentive crowd (such as the attention state effect graph), and so on. In the passenger flow data statistics shown in fig. 8a and fig. 8b, each value may change in real time, and statistics of the passenger flow data over periods such as the current day, 7 days and 30 days may be compiled, where "the current day" refers to the period from the start of the current day up to the present moment.
It will be understood by those skilled in the art that, in the above method of the present disclosure, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same concept, an embodiment of the present disclosure further provides a device for identifying a state of a display area, as shown in fig. 9, which is an architectural schematic diagram of the device for identifying a state of a display area provided in the embodiment of the present disclosure, and the device includes an obtaining module 901, a first determining module 902, a second determining module 903, and a control module 904, specifically:
an obtaining module 901, configured to obtain a video picture acquired by a camera device deployed in a display area;
a first determining module 902, configured to determine, based on the collected video picture, passenger flow data of the display area in a first time period, where the passenger flow data includes at least two of the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of times of passenger flow, and the number of people in the passenger flow;
a second determining module 903, configured to determine, by using the passenger flow data, crowd state data of the display area in the first time period;
a control module 904, configured to control a display state of the display area in the display screen according to the crowd status data.
In one possible embodiment, the crowd state data includes a crowd heat level;
the control module 904, when controlling the display state of the display area in the display screen by using the crowd status data, is configured to:
and controlling the layer of the display area corresponding to the display area in the display picture to present a color grade matched with the crowd heat grade.
In a possible implementation, the second determining module 903 is further configured to determine the crowd heat level according to the following manner:
and determining a crowd heat value based on the weight coefficients respectively corresponding to at least two kinds of passenger flow data in the passenger flow data and the numerical values of the at least two kinds of passenger flow data, and determining the crowd heat level corresponding to the range of the crowd heat value.
In a possible implementation manner, in a case where the passenger flow data includes the number of people in the passenger flow and the number of times of passenger flow, the second determining module 903, when determining the crowd state data of the display area in the first time period by using the passenger flow data, is configured to:
acquire the number of people in the passenger flow and the number of times of passenger flow at a plurality of time nodes in the first time period by using the number of people in the passenger flow and the number of times of passenger flow;
the control module 904, when controlling the display state of the display area in the display screen by using the crowd status data, is configured to:
and displaying a comparison result between the variation trend of the number of the passenger flow persons in the display area and the variation trend of the number of the passenger flow persons in the display area in the first time period in the display picture based on the number of the passenger flow persons and the number of the passenger flow persons in a plurality of time nodes of the first time period.
In one possible embodiment, the second determining module 903, when determining the crowd status data of the display area in the first time period by using the passenger flow data, is configured to:
determining crowd distribution data under different attributes based on attribute information of each user, wherein the crowd distribution data comprises distribution data under at least one attribute of age, gender, charm value and emotion;
the control module 904, when controlling the display state of the display area in the display screen by using the crowd status data, is configured to:
and controlling the display picture to respectively display the crowd distribution diagram under each attribute by using the crowd distribution data under different attributes.
In a possible implementation manner, in a case that the passenger flow data includes a duration of interest of each user and attribute information of each user, the second determining module 903, when determining the crowd status data of the display area in the first time period by using the passenger flow data, is configured to:
based on the attribute information of each user appearing in the display area, dividing each user into crowd sets with different attributes;
acquiring the attention duration of each user in the display area in the crowd set with each type of attributes;
determining the attention total duration of the crowd set with each type of attribute based on the attention duration corresponding to each user in the crowd set with each type of attribute;
the control module 904, when controlling the display state of the display area in the display screen by using the crowd status data, is configured to:
and controlling the attention state effect graph of the crowd set with each type of attribute to be displayed in the display picture based on the attention total time length of the crowd set with each type of attribute.
In one possible embodiment, the users in the crowd set of each type of attribute satisfy at least two of the attributes of age, gender, charm value and emotion.
In one possible embodiment, the focus state effect graph is displayed by a specific graph; the total attention duration of the crowd set with each type of attribute is different, and the display effect of the corresponding specific image is also different.
In a possible implementation, the first determining module 902 is further configured to determine the attention duration according to the following manner:
for any user, identifying face orientation data of the user in the collected video picture;
recording the time when the user pays attention to the display object in the display area for the first time under the condition that the face orientation data indicates that the user pays attention to the display object;
and determining the attention duration of the user paying attention to the display object in the first time period based on the recorded time of the user paying attention to the display object for the first time, wherein the attention duration does not exceed the first time period.
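The duration computation above can be sketched as follows, assuming all times are seconds on a single clock and that the first-attention time has already been recorded; the helper name and signature are assumptions:

```python
def attention_duration(first_attended_at, now, period_start, period_end):
    """Attention duration within [period_start, period_end], measured
    from the recorded first-attention time and capped so it cannot
    exceed the length of the first time period."""
    if first_attended_at is None:
        return 0.0  # the user never attended the display object
    start = max(first_attended_at, period_start)
    end = min(now, period_end)
    return max(0.0, end - start)

# user first attended at t=110 within period [100, 160], observed at t=150
print(attention_duration(110, 150, 100, 160))
```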
In one possible embodiment, the face orientation data includes a pitch angle and a yaw angle of the face;
the first determining module 902, when detecting that the face orientation data indicates that the user is interested in a presentation object of the presentation area, is configured to:
determining that the face orientation data indicates that the user is interested in a display object of the display area if the pitch angle is within a first angular range and the yaw angle is within a second angular range.
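The angle test above is a pair of range checks. The concrete ranges below are placeholders, since the text fixes no values:

```python
def is_attending(pitch_deg, yaw_deg,
                 pitch_range=(-15.0, 15.0), yaw_range=(-20.0, 20.0)):
    """A face counts as attending the display object when its pitch
    lies in the first angle range and its yaw in the second."""
    return (pitch_range[0] <= pitch_deg <= pitch_range[1]
            and yaw_range[0] <= yaw_deg <= yaw_range[1])

print(is_attending(0.0, 0.0))   # facing the display
print(is_attending(30.0, 0.0))  # looking too far up or down
```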
In a possible embodiment, the first determining module 902 is further configured to determine the number of people in the display area in the first time period according to the following manner:
determining the total occurrence number of each user in the display area in the first time period;
recording, for any user, the start time of each entry of the user into the display area; if the interval between two consecutive recorded start times exceeds a first duration, determining that the user has appeared once; the first duration does not exceed the total length of the first time period.
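The appearance-counting rule above (a new appearance only when consecutive entry start times are far enough apart) can be sketched as:

```python
def count_appearances(entry_times, first_duration):
    """Count a user's distinct appearances: entries whose start times
    are within `first_duration` of the previous entry are treated as
    the same appearance."""
    times = sorted(entry_times)
    if not times:
        return 0
    appearances = 1
    for prev, cur in zip(times, times[1:]):
        if cur - prev > first_duration:
            appearances += 1
    return appearances

# entries at t=0 and t=5 merge; t=300 is a second appearance
print(count_appearances([0, 5, 300], first_duration=60))
```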
In a possible embodiment, the first determining module 902 is further configured to determine the user stay time of the presentation area in the first time period according to the following manner:
determining the total length of stay of each user in each occurrence in the first time period;
and for any user, in a case where the interval between the start times of two successive entries of the user into the display area is determined to exceed a first duration, taking that interval as the stay duration of the user for that single appearance.
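The translated text is hard to parse here; the sketch below follows the literal reading that when two successive entry start times are more than the first duration apart, that interval is taken as the stay duration of the earlier appearance. Treat this interpretation as an assumption:

```python
def stay_durations(entry_times, first_duration):
    """Per-appearance stay durations for one user, under the literal
    reading above; consecutive entries closer together than
    `first_duration` yield no separate stay here."""
    times = sorted(entry_times)
    return [cur - prev
            for prev, cur in zip(times, times[1:])
            if cur - prev > first_duration]

# the gap 0 -> 100 exceeds 60, so it becomes one stay of 100
print(stay_durations([0, 100, 105], first_duration=60))
```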
In a possible implementation manner, the obtaining module 901 is further configured to obtain a trigger operation for the display area in the display screen before controlling the display state of the display area in the display screen by using the crowd state data.
In a possible implementation manner, the obtaining module 901, when obtaining the trigger operation for the display area in the display screen, is configured to:
controlling the display interface to display a gesture prompt box, wherein the gesture prompt box comprises gesture description content;
detecting a user gesture in a video picture acquired by the camera equipment;
and under the condition that the user gesture is consistent with the gesture recorded by the gesture description content, confirming to acquire the trigger operation aiming at the display area in the display picture.
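The gesture-triggering flow above reduces to matching recognized gestures against the prompted one. The labels below stand in for the output of a real gesture classifier, which the text does not specify:

```python
def trigger_from_gestures(frame_gestures, prompted_gesture):
    """Scan gestures recognized in successive camera frames and
    confirm the trigger operation once one matches the gesture
    described in the prompt box."""
    for gesture in frame_gestures:
        if gesture == prompted_gesture:
            return True
    return False

print(trigger_from_gestures(["none", "fist", "wave"], "wave"))
```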
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or the modules included therein, may be used to execute the methods described in the above method embodiments; for specific implementation, reference may be made to the description of those method embodiments, and details are not repeated here for brevity.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device. Referring to fig. 10, a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure includes a processor 1001, a memory 1002, and a bus 1003. The memory 1002 is used for storing execution instructions and includes an internal memory 10021 and an external memory 10022. The internal memory 10021 temporarily stores operation data in the processor 1001 and data exchanged with the external memory 10022, such as a hard disk; the processor 1001 exchanges data with the external memory 10022 through the internal memory 10021. When the electronic device 1000 operates, the processor 1001 and the memory 1002 communicate with each other through the bus 1003, causing the processor 1001 to execute the following instructions:
acquiring a video picture acquired by camera equipment deployed in a display area;
determining passenger flow data of the display area in a first time period based on the collected video picture, wherein the passenger flow data comprises at least two of: the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of passenger flow persons, and the number of passenger flow person-times;
determining crowd state data of the display area in the first time period by using the passenger flow data;
and controlling the display state of the display area in the display picture by utilizing the crowd state data.
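The four instructions above form a small pipeline. A toy end-to-end sketch with stand-in helpers is given below; the weights, threshold, and colour grades are invented for illustration and are not from the patent:

```python
def passenger_flow_data(frames):
    # stand-in for person detection/tracking over the captured frames
    users = {f["user_id"] for f in frames}
    return {"person_count": len(users), "visit_count": len(frames)}

def crowd_state(flow, w_person=0.6, w_visit=0.4):
    # a weighted heat value over two flow statistics (weights invented)
    return {"heat": w_person * flow["person_count"] + w_visit * flow["visit_count"]}

def display_state(state, threshold=5.0):
    # map the heat value to a colour grade for the area's display layer
    return "warm" if state["heat"] > threshold else "cool"

frames = [{"user_id": "u1"}, {"user_id": "u2"}, {"user_id": "u1"}]
print(display_state(crowd_state(passenger_flow_data(frames))))  # "cool"
```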
In addition, the present disclosure also provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method for identifying the state of the display area in the above method embodiments are performed.
The computer program product of the method for identifying a state of a display area provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute steps of the method for identifying a state of a display area described in the embodiments of the method.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and the apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only one logical division, and there may be other divisions in actual implementation; a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed.

In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented as an indirect coupling or communication connection between devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (17)

1. A state identification method for a display area is characterized by comprising the following steps:
acquiring a video picture acquired by camera equipment deployed in a display area;
determining passenger flow data of the display area in a first time period based on the collected video picture, wherein the passenger flow data comprises at least two of: the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of passenger flow persons, and the number of passenger flow person-times;
determining crowd state data of the display area in the first time period by using the passenger flow data;
and controlling the display state of the display area in the display picture by utilizing the crowd state data.
2. The method of claim 1, wherein the crowd status data comprises a crowd heat rating;
the controlling the display state of the display area in the display picture by using the crowd state data comprises:
and controlling the layer of the display area corresponding to the display area in the display picture to present a color grade matched with the crowd heat grade.
3. The method of claim 2, wherein the population heat rating is determined according to:
and determining a crowd heat value based on the weight coefficients respectively corresponding to at least two kinds of passenger flow data in the passenger flow data and the numerical values of the at least two kinds of passenger flow data, and determining the crowd heat level corresponding to the range of the crowd heat value.
4. The method of claim 1, wherein, in a case that the passenger flow data includes the number of passenger flow persons and the number of passenger flow person-times, the determining crowd state data of the display area in the first time period by using the passenger flow data comprises:
acquiring the number of passenger flow persons and the number of passenger flow person-times at a plurality of time nodes in the first time period by using the number of passenger flow persons and the number of passenger flow person-times;
the controlling the display state of the display area in the display picture by using the crowd state data comprises:
and displaying, in the display picture, a comparison result between the variation trend of the number of passenger flow persons and the variation trend of the number of passenger flow person-times in the display area over the first time period, based on the number of passenger flow persons and the number of passenger flow person-times at the plurality of time nodes of the first time period.
5. The method of claim 1, wherein said determining crowd status data for the presentation area during the first time period using the passenger flow data comprises:
determining crowd distribution data under different attributes based on attribute information of each user, wherein the crowd distribution data comprises distribution data under at least one attribute of age, gender, charm value and emotion;
the controlling the display state of the display area in the display picture by using the crowd state data comprises:
and controlling the display picture to respectively display the crowd distribution diagram under each attribute by using the crowd distribution data under different attributes.
6. The method according to claim 1, wherein in the case that the passenger flow data includes the attention duration of each user and the attribute information of each user, the determining the crowd status data of the display area in the first time period by using the passenger flow data includes:
based on the attribute information of each user appearing in the display area, dividing each user into crowd sets with different attributes;
acquiring the attention duration of each user in the display area in the crowd set with each type of attributes;
determining the total attention duration of the crowd set with each type of attribute based on the attention duration corresponding to each user in that crowd set;
the controlling the display state of the display area in the display picture by using the crowd state data comprises:
and controlling, based on the total attention duration of the crowd set with each type of attribute, the display picture to display an attention state effect graph for the crowd set with each type of attribute.
7. The method of claim 6, wherein the users in the crowd set with each type of attribute conform to at least two of the attributes of age, gender, charm value, and emotion.
8. The method according to claim 6, wherein the attention state effect graph is displayed by a specific graphic; crowd sets with different total attention durations correspond to specific graphics with different display effects.
9. The method according to any one of claims 1 and 6 to 8, wherein the attention duration is determined according to the following:
for any user, identifying face orientation data of the user in the collected video picture;
recording the time when the user pays attention to the display object in the display area for the first time under the condition that the face orientation data indicates that the user pays attention to the display object;
and determining the attention duration of the user paying attention to the display object in the first time period based on the recorded time of the user paying attention to the display object for the first time, wherein the attention duration does not exceed the first time period.
10. The method of claim 9, wherein the face orientation data comprises pitch and yaw angles of the face;
the detecting that the face orientation data indicates that the user is interested in a presentation object of the presentation area comprises:
determining that the face orientation data indicates that the user is interested in a display object of the display area if the pitch angle is within a first angular range and the yaw angle is within a second angular range.
11. The method of claim 1 or 4, wherein the number of people in the presentation area during the first time period is determined according to the following:
determining the total occurrence number of each user in the display area in the first time period;
recording, for any user, the start time of each entry of the user into the display area; if the interval between two consecutive recorded start times exceeds a first duration, determining that the user has appeared once; the first duration does not exceed the total length of the first time period.
12. The method of claim 1, wherein the length of time the user remains in the presentation area during the first time period is determined according to:
determining the total length of stay of each user in each occurrence in the first time period;
and for any user, in a case where the interval between the start times of two successive entries of the user into the display area is determined to exceed a first duration, taking that interval as the stay duration of the user for that single appearance.
13. The method according to any one of claims 1 to 12, further comprising, before controlling the display state of the display area in the display picture by using the crowd state data:
and acquiring the trigger operation aiming at the display area in the display picture.
14. The method according to claim 13, wherein the acquiring of the trigger operation for the presentation area in the display screen comprises:
controlling the display interface to display a gesture prompt box, wherein the gesture prompt box comprises gesture description content;
detecting a user gesture in a video picture acquired by the camera equipment;
and under the condition that the user gesture is consistent with the gesture recorded by the gesture description content, confirming to acquire the trigger operation aiming at the display area in the display picture.
15. A state recognition apparatus for a display area, comprising:
the acquisition module is used for acquiring video pictures acquired by the camera equipment deployed in the display area;
the first determining module is used for determining passenger flow data of the display area in a first time period based on the collected video pictures, wherein the passenger flow data comprises at least two of: the stay duration of each user appearing in the display area, the attention duration of each user, the attribute information of each user, the number of passenger flow persons, and the number of passenger flow person-times;
the second determining module is used for determining the crowd state data of the display area in the first time period by using the passenger flow data;
and the control module is used for controlling the display state of the display area in the display picture by utilizing the crowd state data.
16. An electronic device, comprising: processor, memory and bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the steps of the method of status identification of a presentation area according to any one of claims 1 to 14.
17. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method for status identification of a presentation area according to any one of claims 1 to 14.
CN201911416336.5A 2019-12-31 2019-12-31 State identification method, device and equipment for display area and storage medium Withdrawn CN110942055A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201911416336.5A CN110942055A (en) 2019-12-31 2019-12-31 State identification method, device and equipment for display area and storage medium
PCT/CN2020/105284 WO2021135196A1 (en) 2019-12-31 2020-07-28 Status recognition method and device for display region, electronic apparatus, and storage medium
JP2021528437A JP2022519149A (en) 2019-12-31 2020-07-28 Exhibition area State recognition methods, devices, electronic devices, and recording media
KR1020217015862A KR20210088600A (en) 2019-12-31 2020-07-28 Exhibition area state recognition method, apparatus, electronic device and recording medium
TW109129710A TW202127348A (en) 2019-12-31 2020-08-31 Method of a state recognition in an exhibition area, apparatus thereof, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911416336.5A CN110942055A (en) 2019-12-31 2019-12-31 State identification method, device and equipment for display area and storage medium

Publications (1)

Publication Number Publication Date
CN110942055A (en)

Family

ID=69913661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911416336.5A Withdrawn CN110942055A (en) 2019-12-31 2019-12-31 State identification method, device and equipment for display area and storage medium

Country Status (5)

Country Link
JP (1) JP2022519149A (en)
KR (1) KR20210088600A (en)
CN (1) CN110942055A (en)
TW (1) TW202127348A (en)
WO (1) WO2021135196A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112307871A (en) * 2020-05-29 2021-02-02 北京沃东天骏信息技术有限公司 Information acquisition method and device, attention detection method, device and system
CN112633249A (en) * 2021-01-05 2021-04-09 北华航天工业学院 Embedded pedestrian flow detection method based on light deep learning framework
CN112987916A (en) * 2021-02-06 2021-06-18 北京智扬天地展览服务有限公司 Automobile exhibition stand interaction system and method
WO2021135196A1 (en) * 2019-12-31 2021-07-08 北京市商汤科技开发有限公司 Status recognition method and device for display region, electronic apparatus, and storage medium
CN113269032A (en) * 2021-04-12 2021-08-17 北京华毅东方展览有限公司 Exhibition early warning method and system for exhibition hall
CN114339337A (en) * 2021-12-23 2022-04-12 北京德为智慧科技有限公司 Display control method and device, electronic equipment and storage medium
CN114546104A (en) * 2021-12-23 2022-05-27 北京德为智慧科技有限公司 Display adjusting method and device, electronic equipment and storage medium
CN115083318A (en) * 2022-06-17 2022-09-20 云知声智能科技股份有限公司 AI exhibition hall automatic explanation method and system based on crowd thermal distribution monitoring

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705470A (en) * 2021-08-30 2021-11-26 北京市商汤科技开发有限公司 Method and device for acquiring passenger flow information, computer equipment and storage medium
CN114220140B (en) * 2021-11-23 2022-07-22 慧之安信息技术股份有限公司 Image recognition-based market passenger flow volume statistical method and device
CN114818397B (en) * 2022-07-01 2022-09-20 中汽信息科技(天津)有限公司 Intelligent simulation method and system for customized scene
CN116629979B (en) * 2023-07-21 2024-04-26 深圳市方度电子有限公司 Digital store management system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104239617A (en) * 2014-09-02 2014-12-24 百度在线网络技术(北京)有限公司 Thermodynamic diagram showing method and device
KR101822552B1 (en) * 2017-09-20 2018-01-26 에스제이테크 주식회사 Device for Advertizing Goods by Sensing Approach of Shopper
CN108985218A (en) * 2018-07-10 2018-12-11 上海小蚁科技有限公司 People flow rate statistical method and device, calculates equipment at storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010268158A (en) * 2009-05-13 2010-11-25 Fujifilm Corp Image processing system, method of processing image, and program
CN103514242A (en) * 2012-12-19 2014-01-15 Tcl集团股份有限公司 Intelligent interaction method and system for electronic advertising board
JP6444655B2 (en) * 2014-01-14 2018-12-26 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Display method, stay information display system, display control device, and display control method
US20150199698A1 (en) * 2014-01-14 2015-07-16 Panasonic Intellectual Property Corporation Of America Display method, stay information display system, and display control device
JP6617396B2 (en) * 2014-03-19 2019-12-11 カシオ計算機株式会社 Imaging apparatus and imaging method
WO2016185586A1 (en) * 2015-05-20 2016-11-24 三菱電機株式会社 Information processing device and interlock control method
CN105608419B (en) * 2015-12-15 2019-06-04 上海微桥电子科技有限公司 A kind of passenger flow video detecting analysis system
JP6256885B2 (en) * 2016-03-31 2018-01-10 パナソニックIpマネジメント株式会社 Facility activity analysis apparatus, facility activity analysis system, and facility activity analysis method
JP2017211932A (en) * 2016-05-27 2017-11-30 大日本印刷株式会社 Information processing device, information processing system, program and information processing method
CN106682637A (en) * 2016-12-30 2017-05-17 深圳先进技术研究院 Display item attraction degree analysis and system
JP6724827B2 (en) * 2017-03-14 2020-07-15 オムロン株式会社 Person trend recorder
CN108647242B (en) * 2018-04-10 2022-04-29 北京天正聚合科技有限公司 Generation method and system of thermodynamic diagram
CN111178294A (en) * 2019-12-31 2020-05-19 北京市商汤科技开发有限公司 State recognition method, device, equipment and storage medium
CN110942055A (en) * 2019-12-31 2020-03-31 北京市商汤科技开发有限公司 State identification method, device and equipment for display area and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104239617A (en) * 2014-09-02 2014-12-24 百度在线网络技术(北京)有限公司 Thermodynamic diagram showing method and device
KR101822552B1 (en) * 2017-09-20 2018-01-26 에스제이테크 주식회사 Device for Advertizing Goods by Sensing Approach of Shopper
CN108985218A (en) * 2018-07-10 2018-12-11 上海小蚁科技有限公司 People flow rate statistical method and device, calculates equipment at storage medium


Also Published As

Publication number Publication date
JP2022519149A (en) 2022-03-22
TW202127348A (en) 2021-07-16
KR20210088600A (en) 2021-07-14
WO2021135196A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
CN110942055A (en) State identification method, device and equipment for display area and storage medium
JP4876687B2 (en) Attention level measuring device and attention level measuring system
JP6256885B2 (en) Facility activity analysis apparatus, facility activity analysis system, and facility activity analysis method
US10185965B2 (en) Stay duration measurement method and system for measuring moving objects in a surveillance area
CN111563396A (en) Method and device for online identifying abnormal behavior, electronic equipment and readable storage medium
CN110009364B (en) Industry identification model determining method and device
US10410333B2 (en) Product monitoring device, product monitoring system, and product monitoring method
DE112015002842T5 (en) Customer service evaluation device, customer service evaluation system, and customer service evaluation method
WO2011148884A1 (en) Content output device, content output method, content output program, and recording medium with content output program thereupon
JP2015186202A (en) Residence condition analysis device, residence condition analysis system and residence condition analysis method
CN107480624B (en) Permanent resident population's acquisition methods, apparatus and system, computer installation and storage medium
CN111178294A (en) State recognition method, device, equipment and storage medium
US20130102854A1 (en) Mental state evaluation learning for advertising
JP2009151408A (en) Marketing data analyzing method, marketing data analysis system, data analyzing server device, and program
CN106408363A (en) Image processing apparatus and image processing method
JP2012252613A (en) Customer behavior tracking type video distribution system
JP6593949B1 (en) Information processing apparatus and marketing activity support apparatus
CN114510641A (en) Flow statistical method, device, computer equipment and storage medium
CN108875677B (en) Passenger flow volume statistical method and device, storage medium and terminal
CN113591663A (en) Exhibition data visualization information analysis system and analysis method
CN109785114A (en) Credit data methods of exhibiting, device, equipment and medium for audit of providing a loan
JP5104289B2 (en) Action history display system, action history display program
CN112837108A (en) Information processing method and device and electronic equipment
JP7003883B2 (en) A system for assessing the degree of similarity in psychological states between audiences
Schnugg et al. Communicating identity or status? A media analysis of art works visible in photographic portraits of business executives

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40016828

Country of ref document: HK

WW01 Invention patent application withdrawn after publication

Application publication date: 20200331
