CN112036345A - Method for detecting number of people in target place, recommendation method, detection system and medium - Google Patents


Info

Publication number
CN112036345A
CN112036345A
Authority
CN
China
Prior art keywords
target
human head
target frame
frame
head target
Prior art date
Legal status
Pending
Application number
CN202010923053.6A
Other languages
Chinese (zh)
Inventor
郑瑞
唐小军
祖春山
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd
Priority to CN202010923053.6A
Publication of CN112036345A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53: Recognition of crowd images, e.g. recognition of crowd congestion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method for detecting the number of people in a target place, a recommendation method, a detection system and a medium. The detection method comprises the following steps: acquiring an environment image of the target place; determining a target detection area in the environment image; determining facial human head target frames and non-facial human head target frames in the target detection area; and determining the number of people entering and leaving the target place in any time period according to the facial and non-facial human head target frames of each target detection area in the time period. The method accurately identifies the human head target frames in the target detection area, and can therefore accurately determine the number of people entering and leaving the target place in any time period.

Description

Method for detecting number of people in target place, recommendation method, detection system and medium
Technical Field
The application relates to the technical field of intelligent systems, and in particular to a method for detecting the number of people in a target place, a recommendation method, a detection system and a medium.
Background
At present, many scenarios require counting the number of people in a certain place or area. For example, when a user selects a place to use, the user generally needs to know the number of people queuing at the place, the number of people entering and leaving, the usage of its resources, and the like, so as to make a better choice and save time.
Disclosure of Invention
In view of the shortcomings of existing approaches, the application provides a method for detecting the number of people in a target place, a recommendation method, a detection system and a medium, and aims to solve the technical problem in the prior art that the number of people in a certain place or area cannot be accurately counted and resources cannot be reasonably utilized.
In a first aspect, an embodiment of the present application provides a method for detecting a number of people in a target place, including:
acquiring an environment image of a target place;
determining a target detection area in the environment image;
determining a facial human head target frame and a non-facial human head target frame in the target detection area;
and determining the number of people entering and exiting the target place in any time period according to the facial head target frame and the non-facial head target frame of each target detection area in the time period.
Optionally, determining the number of people entering and exiting the target location in any time period according to the facial head target frame and the non-facial head target frame of each target detection area in the time period includes:
for each of the facial human head target frames and the non-facial human head target frames, determining whether the human head target frame simultaneously satisfies the following conditions within any time period: the human head target frame overlaps a reference mark in the environment image; the intersection-over-union (IoU) of the human head target frame in each frame with the human head target frame in the previous frame is greater than an IoU threshold; and the human head target frame in each frame is in a specified direction relative to the human head target frame in the previous frame;
if the head target frame simultaneously meets the conditions in the time period, determining the moving direction of the head target frame as entering or leaving the target place;
and determining the number of the persons entering or leaving the target place in the time period according to the number of the head target frames entering or leaving the target place in the time period.
Optionally, the method for detecting the number of people in the target location provided in the embodiment of the present application further includes:
and determining the current queuing number of the target places according to the number of the face head target frames or the number of the people who enter and exit the target places in the first time period.
Optionally, determining a facial human head target frame and a non-facial human head target frame in the target detection region includes:
cutting out a sub-environment image containing a target detection area from the environment image;
determining each head target frame in the sub-environment image according to a head detection algorithm;
determining a human head target frame positioned in the range of the target detection area from each human head target frame of the sub-environment image;
and recognizing a facial human head target frame and a non-facial human head target frame in each human head target frame in the target detection area according to a human face recognition algorithm.
Optionally, recognizing a facial human head target frame and a non-facial human head target frame in each human head target frame in the target detection region according to a human face recognition algorithm, including:
for each human head target frame in the target detection area, identifying whether the current human head target frame and a first number of human head target frames adjacent to the current human head target frame are both facial human head target frames according to a human face identification algorithm;
if the current human head target frame and a first number of human head target frames adjacent to the current human head target frame are both facial human head target frames, determining that the current human head target frame is a facial human head target frame;
if the current human head target frame is a facial human head target frame and the first number of human head target frames adjacent to the current human head target frame are all non-facial human head target frames, determining that the current human head target frame is a non-facial human head target frame;
if the current human head target frame and the first number of human head target frames adjacent to the current human head target frame are all non-facial human head target frames, determining that the current human head target frame is a non-facial human head target frame;
and if the current human head target frame is a non-facial human head target frame and the first number of human head target frames adjacent to the current human head target frame are all facial human head target frames, determining that the current human head target frame is a facial human head target frame.
In a second aspect, an embodiment of the present application provides a resource recommendation method, including:
for a plurality of target places, acquiring, for each target place, the current queuing number, the number of people entering and leaving in a second time period, place position information, and request information of a user terminal; the current queuing number is determined by an optional implementation of the detection method provided in the first aspect of the embodiments of the application; the number of people entering and leaving is determined by the detection method provided in the first aspect of the embodiments of the application; the request information comprises user terminal position information;
determining the recommended value of each target place according to the current queuing number of people in each place, the number of people who enter and exit in the second time period, the place position information and the user position information;
and determining at least one target place to be recommended to send to the user terminal according to the recommendation score.
Optionally, determining the recommendation score of each target location according to the current number of people in line in each location, the number of people who enter and exit in the second time period, the location information, and the user location information, includes:
determining the resource use speed of each target place according to the number of people entering and exiting each target place in the second time period;
determining the distance between the user terminal and each target place according to the position information of the user terminal and the place position information of each target place;
and determining the recommendation score of each target place according to the current queuing number of people, the resource using speed and the distance between the user and each target place.
Optionally, the resource recommendation method provided in the embodiment of the present application further includes:
and for each target place to be recommended, determining at least one route from the current position of the user terminal to the current position of the target place according to the place position information and the position information of the user terminal, and sending the route information of the at least one route to the user terminal.
In a third aspect, an embodiment of the present application provides a detection system, including: an image acquisition device and a server;
the image acquisition device is arranged in a target place and is in communication connection with the server;
the image acquisition device is used for: acquiring an environment image of a target place, and sending the environment image to a server;
the server is configured to execute the detection method provided in the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the detection method provided in the first aspect of the embodiment of the present application or the resource recommendation method provided in the second aspect of the embodiment of the present application.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
1) the method and the device can determine the target detection area, the facial human head target frame and the non-facial human head target frame in the target detection area based on the environment image of the target place, and realize accurate identification of the human head target frame in the target detection area; based on the number of the accurately identified facial head target frames and the number of the non-facial head target frames, the number of the entering and exiting persons of the target place in any time period can be accurately determined, more accurate reference data are provided for the user to select the target place, the user can quickly make a selection more suitable for the user, and the waste of the selection time and resources of the user is reduced;
2) the number of the current queuing people in the target place can be accurately determined based on the number of the accurately identified face head target frames; under the condition that the head target frame of the face is not identified, the current queuing number of people in the target place can be accurately estimated based on the number of people who enter and exit the target place in the previous first time period, accurate reference data are provided for the user to select the target place, the user can conveniently and quickly make a selection more suitable for the user, and the selection time of the user and the waste of resources are reduced;
3) according to the method and the device, the recommendation score of each place can be determined based on the number information (the number of people who queue at present and the number of people who enter and exit in the second time period), the place position information and the user terminal position information of the target place, and the appropriate target place is recommended to the user terminal as a reference according to the recommendation score, so that the user can select the appropriate target place more quickly by combining the self condition, great convenience is provided for the user, and the time of the user is saved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a method for detecting the number of people in a target location according to an embodiment of the present application;
fig. 2 is a schematic diagram of an environment image area, a sub-environment image area and a target detection area in an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating another method for detecting the number of people at a target location according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a resource recommendation method according to an embodiment of the present application;
fig. 5 is a flowchart illustrating another resource recommendation method according to an embodiment of the present application;
FIG. 6 is a block diagram of a detection system according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of another exemplary detection system according to the present disclosure;
fig. 8 is a schematic structural framework diagram of a server according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar parts or parts having the same or similar functions throughout. In addition, if a detailed description of the known art is not necessary for illustrating the features of the present application, it is omitted. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments.
The embodiment of the application provides a method for detecting the number of people in a target place, as shown in fig. 1, the method comprises the following steps:
s101, acquiring an environment image of a target place.
Optionally, an environment image of the target place collected by a camera device arranged inside or outside the target place may be acquired; the environment image may be an image of the interior or the exterior of the target place, depending on the actual condition of the target place and the placement of the camera device.
The target place is not limited in the embodiment of the application, and the target place can be any place or area where people count is needed, such as public places like public toilets, restaurants, airports, railway stations, and the like.
And S102, determining a target detection area in the environment image.
The target detection area may be determined according to the actual situation of the target place. For example, if the target place is a public toilet, the doorway area of the public toilet in the environment image may be used as the target detection area; if the target place is a restaurant, the entrance waiting area (a waiting area is usually provided in a restaurant) in the environment image may be used as the target detection area.
For target sites involving queuing and determination of the number of people in the queue, the target detection area may include the end of the queue within a conventional field of view.
S103, determining a facial human head target frame and a non-facial human head target frame in the target detection area.
Optionally, cutting out a sub-environment image containing the target detection area from the environment image; determining each head target frame in the sub-environment image according to a head detection algorithm; determining a human head target frame positioned in the range of the target detection area from each human head target frame of the sub-environment image; and recognizing a facial human head target frame and a non-facial human head target frame in each human head target frame in the target detection area according to a human face recognition algorithm.
In one example, fig. 2 shows an environment image 201, and a target detection area in the environment image may be an area within a trapezoidal frame 202 in fig. 2, and when determining a sub-environment image, a rectangular frame 203 may be determined as a sub-environment image including the target detection area with the center of the trapezoidal frame 202 (i.e., the target detection area) as the center.
In the example shown in fig. 2, the sub-environment image containing the trapezoidal frame 202 is cut out according to the range of the rectangular frame 203, so that it can be fed as an image into the human head detection algorithm to perform human head detection on the target detection area. Since the sub-environment image (for example, 512 pixels on a side) is smaller than the original environment image, unnecessary interference can be reduced during processing and the detection speed increased.
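As a rough illustration, the cropping step above might be sketched as follows. The 512-pixel crop size follows the example in the text; the clamping policy and the function name are assumptions, not the patented implementation:

```python
import numpy as np

def crop_sub_environment(image, trapezoid, size=512):
    """Crop a size x size sub-environment image centred on the target
    detection area, where `trapezoid` is a (4, 2) array of the area's
    vertex coordinates (x, y)."""
    cx, cy = np.asarray(trapezoid, dtype=float).mean(axis=0)  # centre of the trapezoidal frame
    half = size // 2
    h, w = image.shape[:2]
    # Clamp the crop window so it stays inside the original environment image.
    x0 = int(min(max(cx - half, 0), max(w - size, 0)))
    y0 = int(min(max(cy - half, 0), max(h - size, 0)))
    return image[y0:y0 + size, x0:x0 + size]
```

The crop is then passed to the head detector in place of the full frame.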
Optionally, determining a human head target frame located in the target detection area range from each human head target frame of the sub-environment image, including:
for each human head target frame, determining a coordinate extremum range according to the coordinates of the vertices and boundaries of the target detection area; determining whether the center coordinate of the human head target frame is within the extremum range; if the center coordinate of the human head target frame is within the extremum range, determining whether the center coordinate is within the range enclosed by the boundary lines of the target detection area; and if the center coordinate is within that boundary range, determining that the human head target frame is located in the target detection area.
Each coordinate in the embodiment of the present application may be a coordinate in a plane coordinate system established in the environment image, and may be represented by (x, y).
In one example, the boundaries of the target detection region may be expressed by the straight-line formula between each pair of adjacent vertices. According to the four vertices of the target detection region shown in fig. 2 and the straight-line formulas between adjacent vertices, for the center coordinates (x1, y1) of a human head target frame, the maximum and minimum values of x1 and y1 may be determined as a coordinate extremum range, and it is determined whether the center coordinates (x1, y1) are within that range. If the center coordinates (x1, y1) are within the extremum range, it is further determined whether they are within the range enclosed by the four straight boundary lines of the target detection area; if they are, the human head target frame to which the center coordinates (x1, y1) belong is kept as a human head target frame in the target detection area; if they are not, that human head target frame is excluded and is not considered in counting.
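The two-stage containment test described in this example can be sketched as follows. The counter-clockwise vertex ordering and the sign convention of the line test are assumptions for a convex detection area:

```python
def center_in_region(box_center, vertices):
    """Two-stage test: is a head-box centre inside the quadrilateral
    target detection area?  Stage 1 is the cheap coordinate-extremum
    (bounding-box) check; stage 2 tests the point against the straight
    line through each pair of adjacent vertices.  Assumes a convex
    region with vertices listed in counter-clockwise order."""
    x1, y1 = box_center
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    # Stage 1: coordinate extremum range of the region.
    if not (min(xs) <= x1 <= max(xs) and min(ys) <= y1 <= max(ys)):
        return False
    # Stage 2: the centre must lie on the inner side of every boundary line.
    n = len(vertices)
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        # 2-D cross product; negative means outside this edge (CCW convention).
        if (bx - ax) * (y1 - ay) - (by - ay) * (x1 - ax) < 0:
            return False
    return True
```

Head boxes whose centre fails either stage would be excluded from counting.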
Optionally, recognizing a facial human head target frame and a non-facial human head target frame in each human head target frame in the target detection region according to a human face recognition algorithm, including:
for each human head target frame in the target detection area, identifying whether the current human head target frame and a first number of human head target frames adjacent to the current human head target frame are both facial human head target frames according to a human face identification algorithm;
if the current human head target frame and a first number of human head target frames adjacent to the current human head target frame are both facial human head target frames, determining that the current human head target frame is a facial human head target frame; if the current human head target frame is a facial human head target frame and the first number of human head target frames adjacent to the current human head target frame are all non-facial human head target frames, determining that the current human head target frame is a non-facial human head target frame; if the current human head target frame and a first number of human head target frames adjacent to the current human head target frame are non-facial human head target frames, determining that the current human head target frame is a non-facial human head target frame; and if the current human head target frame is a non-facial human head target frame and the first number of human head target frames adjacent to the current human head target frame are all facial human head target frames, determining that the current human head target frame is a facial human head target frame.
Optionally, when identifying whether each human head target frame (such as the current human head target frame or an adjacent human head target frame) is a facial human head target frame according to the face recognition algorithm, it is determined whether the facial feature data of the human head target frame is greater than a facial threshold; if so, the human head target frame is preliminarily determined to be a facial human head target frame, and whether it is finally a facial human head target frame is then determined according to the preliminary determination results of the adjacent human head target frames.
The first number in the embodiments of the application may be set according to actual requirements, and the range from which the first number of human head target frames is selected may be determined according to the actual situation. For example, if the current human head target frame is located in the area between the two dashed lines shown in fig. 2, the adjacent first number of human head target frames may also be selected from the area between the two dashed lines. For target places with an obvious queuing area, adjacent human head target frames with consistent facial orientations can be selected in this way; even when a user faces the image acquisition device but the facial features cannot be captured (for example, the user is looking down at a mobile phone), whether the current human head target frame is a facial human head target frame can still be judged accurately, reducing or even avoiding misjudgment.
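A minimal sketch of the neighbour-based correction described above, assuming the head boxes in the queue are already ordered and using a hypothetical window size `k` for the "first number":

```python
def correct_face_labels(prelim_labels, k=2):
    """Refine preliminary face/non-face labels for an ordered queue of
    head boxes.  `prelim_labels` is a list of booleans (True = facial
    head box) from the face-recognition step.  Matching the four cases
    in the text, a box takes its neighbours' label whenever the k
    nearest neighbours on each side unanimously agree; with mixed
    neighbours the preliminary label is kept.  k is an assumption."""
    corrected = list(prelim_labels)
    for i in range(len(prelim_labels)):
        # Up to k neighbours on each side of the current box (the "first number").
        neighbours = prelim_labels[max(0, i - k):i] + prelim_labels[i + 1:i + 1 + k]
        if neighbours and all(neighbours):
            corrected[i] = True       # all neighbours facial -> facial
        elif neighbours and not any(neighbours):
            corrected[i] = False      # all neighbours non-facial -> non-facial
    return corrected
```

For example, a single head-down user in the middle of a forward-facing queue would be relabelled facial by the unanimous neighbours around them.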
Optionally, the human head detection algorithm in the embodiments of the application may be the CenterNet algorithm, and the face recognition algorithm may be the SeetaFace algorithm.
And S104, determining the number of people entering and exiting the target place in any time period according to the facial head target frame and the non-facial head target frame of each target detection area in the time period.
The arbitrary time period may be the first time period or the second time period hereinafter.
Optionally, for each of the facial human head target frames and the non-facial human head target frames, determining whether the human head target frame simultaneously satisfies the following conditions within any time period: the human head target frame overlaps a reference mark in the environment image; the intersection-over-union (IoU) of the human head target frame in each frame with the human head target frame in the previous frame is greater than an IoU threshold; and the human head target frame in each frame is in a specified direction relative to the human head target frame in the previous frame.
If the head target frame meets the conditions in the time period, determining the moving direction of the head target frame as entering or leaving the target place; and determining the number of the persons entering or leaving the target place in the time period according to the number of the head target frames entering or leaving the target place in the time period.
The reference mark in the embodiment of the present application may be set in the environment image according to actual requirements, and in one example, the reference mark may be a straight line a and a straight line B shown in fig. 2.
The IoU threshold in the embodiments of the application may be set according to actual requirements, for example 0.5: if the IoU is greater than 0.5, the currently determined human head target frame and the human head target frame of the previous frame may be considered to belong to the same person; if the IoU is less than or equal to 0.5, they are considered to belong to different people, and the currently determined human head target frame can be treated as a newly appearing human head target frame and subjected to the above determination again.
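The IoU used for this frame-to-frame matching is the standard intersection-over-union; a minimal sketch for boxes given as (x_min, y_min, x_max, y_max):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two head boxes, each given as
    (x_min, y_min, x_max, y_max).  Returns 0.0 for disjoint boxes."""
    ix0 = max(box_a[0], box_b[0])
    iy0 = max(box_a[1], box_b[1])
    ix1 = min(box_a[2], box_b[2])
    iy1 = min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

With the 0.5 threshold from the text, `iou(prev_box, curr_box) > 0.5` treats the two detections as the same head.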
The previous frame in the embodiment of the present application may be any frame within a frame number range before the current frame, and the frame number range may be set according to a reference mark, in the example shown in fig. 2, the frame number range may be set according to a distance between a straight line a and a straight line B, if the distance between the straight line a and the straight line B is smaller, the frame number range may be set to a smaller range, and if the distance between the straight line a and the straight line B is larger, the frame number range may be set to a larger range.
The designated direction in the embodiment of the present application may be determined according to an actual situation, taking fig. 2 as an example, when determining whether the moving direction of the human head target frame is the direction of entering the target place, the designated direction is the direction from the straight line a to the straight line B, and if the center of the human head target frame moves from the straight line a to the straight line B when the current frame is compared with the previous frame, the moving direction of the human head target frame is considered as the direction of entering the target place; when determining whether the moving direction of the human head target frame is away from the target place, the specified direction is the direction from the straight line B to the straight line A, and if the center of the human head target frame moves from the straight line B to the straight line A compared with the previous frame in the current frame, the moving direction of the human head target frame is considered to be away from the target place.
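Taking the fig. 2 setup with horizontal reference lines and line A above line B, the direction decision might be sketched as follows; the coordinate convention and the function name are assumptions:

```python
def crossing_direction(prev_cy, curr_cy, line_a_y, line_b_y):
    """Classify the movement of a head-box centre between two frames as
    "enter" (moving from line A toward line B) or "leave" (from B toward
    A), using only the centre's y coordinate.  Assumes horizontal
    reference lines with line A above line B (line_a_y < line_b_y)."""
    if curr_cy > prev_cy and prev_cy >= line_a_y and curr_cy <= line_b_y:
        return "enter"   # centre moved from A toward B
    if curr_cy < prev_cy and prev_cy <= line_b_y and curr_cy >= line_a_y:
        return "leave"   # centre moved from B toward A
    return None          # no movement, or movement outside the A-B corridor
```

Counting the head boxes classified "enter" and "leave" over a time period then gives the numbers entering and leaving the target place.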
In an optional embodiment, when the moving direction of the head target frame is determined to be entering or leaving the target place, the moving direction can be further verified through a target tracking algorithm on the basis of the above manner, so that the statistics of the numbers of entering and leaving persons are more accurate.
In another alternative embodiment, a target tracking algorithm may be used instead of the above manner to determine the moving direction of the human head target frame.
Optionally, when determining the moving direction of the human head target frame through the target tracking algorithm, each human head target frame in the target detection area is input into the target tracking algorithm; different human head target frames are assigned different ID (Identity Document) labels, while the same human head target frame keeps the same ID continuously during tracking, and the moving direction of a human head target frame can be determined from the movement of the centers of the target frames with the same ID label across preceding and following frames.
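The ID-based direction check can be sketched as below. It assumes the tracker (Kalman or KCF, per the next paragraph) has already supplied ID-labelled boxes, and that increasing vertical coordinate corresponds to moving toward the entrance; both are assumptions for illustration.

```python
from collections import defaultdict

class DirectionByTrack:
    """Accumulate the center trajectory of each tracked ID and read off its direction."""

    def __init__(self):
        self.history = defaultdict(list)  # track id -> list of center y coordinates

    def update(self, track_id, box):
        """Record the vertical center of an (x1, y1, x2, y2) box for this ID."""
        self.history[track_id].append((box[1] + box[3]) / 2.0)

    def direction(self, track_id):
        """Compare first and latest centers of the same ID; None if undecidable."""
        ys = self.history[track_id]
        if len(ys) < 2:
            return None
        if ys[-1] > ys[0]:
            return "enter"
        if ys[-1] < ys[0]:
            return "leave"
        return None
```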
The target tracking algorithm in the embodiment of the application can be a Kalman algorithm or a KCF (Kernelized Correlation Filter) algorithm. Such tracking algorithms are highly accurate, so accuracy can be guaranteed and counting errors avoided even in scenes with many people moving in complex patterns.
Optionally, as shown in fig. 3, the detection method provided in the embodiment of the present application further includes, on the basis of the steps S101 to S104, the following steps:
S105, determining the current queuing number of people of the target place according to the number of the facial head target frames, or according to the number of people entering and leaving the target place in the first time period.
In an optional embodiment, when the number of the facial head target frames in the target detection area is greater than 0, determining the current number of people in line in the target place according to the number of the facial head target frames, specifically as follows:
h = hf + hw    Expression (1)
In expression (1), h represents the current number of queued persons at the target location, hf represents the number of facial head target frames in the target detection area, and hw represents the number of persons that can be accommodated in the area between the start position of the target location and the target detection area. The region corresponding to hw is the region that cannot be captured by the image acquisition device but may still contain queuing persons; hw compensates for this acquisition gap, making the determined current queuing number more accurate.
Optionally, when h determined according to expression (1) is greater than a preset number-of-persons threshold, the threshold is taken as the value of h. The number-of-persons threshold can be set to the maximum number of people that can appear in the shooting area of the image acquisition device, or set according to an empirical value.
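Expression (1) together with the threshold cap can be sketched as a one-liner; the function name and argument names are illustrative, not from the patent.

```python
def current_queue_count(hf, hw, max_count):
    """Expression (1): h = hf + hw, capped at the preset number-of-persons threshold.

    hf: number of facial head target frames in the target detection area.
    hw: accommodatable persons between the start of the target place and the area.
    max_count: preset number-of-persons threshold.
    """
    return min(hf + hw, max_count)
```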
In another alternative embodiment, when the number of facial head target frames in the target detection area is 0, the current queuing number of the target place is determined according to the number of people entering and leaving the target place in the first time period.
The time range of the first time period can be set according to actual requirements, and in one example, the first time period can be set to be 20 minutes before the current time, and the difference between the number of entering people and the number of leaving people of the target place within 20 minutes before the current time is used as the current queuing number of people of the target place.
When no facial head target frame is currently detected in the target detection area (for example, when the queue is short and has not yet reached the photographable area, or no one is currently queuing), the queuing condition of the target place cannot be measured well from the detection result of the target detection area alone; in this case, the current queuing number can be estimated more accurately based on the number of people entering and leaving in the first time period before the current time.
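The fallback estimate described above can be sketched as follows. The clamp at zero is an added safeguard against more departures than entries being counted in the window, not something stated in the text.

```python
def estimated_queue_count(entered, left):
    """Estimate the current queue when no facial head target frame is detected:
    entries minus departures over the first time period, clamped at zero
    (the clamp is an assumption for robustness)."""
    return max(entered - left, 0)
```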
Based on the same inventive concept, an embodiment of the present application provides a resource recommendation method, as shown in fig. 4, the resource recommendation method includes:
S401, for a plurality of target places, acquiring the current queuing number of people of each target place, the number of people entering and leaving in the second time period, place position information, and request information of the user terminal.
The current queuing number is the one determined by the detection method in the embodiment of the application; the number of people entering and leaving is likewise determined by the detection method in the embodiment of the application; the request information of the user terminal includes user terminal location information.
The second time period may be set according to actual requirements; for example, it may be set to the 10 minutes before the current time.
S402, determining the recommendation score of each target place according to the current queuing number of people in each place, the number of people entering and exiting in the second time period, the place position information and the user position information.
Optionally, determining the resource usage speed of each target place according to the number of people entering and leaving each target place in the second time period; determining the distance between the user terminal and each target place according to the position information of the user terminal and the place position information of each target place; and determining the recommendation score of each target place according to the current queuing number of people, the resource usage speed, and the distance between the user terminal and each target place.
In an alternative embodiment, the resource usage speed of each target site may be expressed as the number of departures per unit time length (e.g., per minute); specifically, the resource usage speed of the target site may be determined by:
v = h[-t:] / t    Expression (2)
In expression (2), v represents the resource usage speed of the target location, t represents the duration of the second time period (which may be assigned according to actual demand), and h[-t:] represents the number of people who left the target location in the second time period before the current time.
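Expression (2) is a simple rate; a minimal sketch (function and argument names are illustrative):

```python
def resource_usage_speed(departures_in_period, t_minutes):
    """Expression (2): v = h[-t:] / t, departures per minute over the second time period."""
    if t_minutes <= 0:
        raise ValueError("duration of the second time period must be positive")
    return departures_in_period / t_minutes
```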
In an alternative embodiment, the recommendation score for each target site may be determined by:
Expression (3) (the specific form of expression (3) is presented as an image in the original filing)
In expression (3), score represents the recommendation score of the target location, v represents the resource usage speed of the target location, h represents the current number of people queued at the target location, s represents the distance between the user terminal and the target location, and k is a constant (which may be set according to actual needs).
Expression (3) determines the recommendation score by combining three factors: the resource usage speed v of the target place, the current queuing number h, and the distance s between the user terminal and the target place, so that the recommendation scheme determined from the score better matches actual user requirements.
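Since the exact form of expression (3) is given only as an image in the filing, the sketch below uses an assumed combination of the three factors that merely reproduces the stated behaviour (score rises with v, falls with h and s); it is illustrative, not the patent's formula.

```python
def recommendation_score(v, h, s, k=1.0, eps=1e-6):
    """Assumed illustrative stand-in for Expression (3): higher usage speed v raises
    the score; larger queue h and larger distance s lower it. k is a tunable
    constant; eps avoids division by zero. Not the patent's actual formula."""
    return k * v / ((h + eps) * (s + eps))
```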
S403, determining, according to the recommendation score, at least one target place to be recommended and sending it to the user terminal.
Optionally, at least one target place is selected as a target place to be recommended, in descending order of recommendation score, and sent to the user terminal. In one example, the three highest-scoring target places are selected as target places to be recommended and sent to the user terminal, thereby recommending target places to the user.
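The descending-score selection can be sketched as below, assuming the scores are held in a dict keyed by a site identifier (an assumption about the data layout).

```python
def top_k_sites(scores, k=3):
    """Select the k target places with the highest recommendation scores.

    scores: dict mapping site id -> recommendation score.
    Returns site ids in descending score order, at most k of them.
    """
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return [site for site, _ in ranked[:k]]
```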
Optionally, as shown in fig. 5, the resource recommendation method provided in the embodiment of the present application, on the basis of the steps S401 to S403, further includes the following steps:
S404, for each target place to be recommended, determining at least one route from the current position of the user terminal to the current position of the target place according to the place position information and the position information of the user terminal, and sending route information of the at least one route to the user terminal.
The user can inquire the received route information through the inquiry function of the user terminal and select a route suitable for the user to go to a target place according to the route information.
In an alternative embodiment, the route information of the at least one route transmitted to the user terminal includes: sight information on at least one route. The sight spot information may assist the user in route selection, and if the user is interested in a sight spot on a certain route, a route passing through the sight spot may be selected.
Based on the same inventive concept, an embodiment of the present application provides a detection system, as shown in fig. 6, the detection system includes: image capture device 610 and server 620; image acquisition device 610 is set up in the target site, and image acquisition device 610 and server 620 communication connection.
The image capturing device 610 is configured to: collecting an environment image of a target site, and sending the environment image to a server 620; the server 620 is used for executing the method for detecting the number of people in the target place provided by the embodiment of the application.
Optionally, the image capturing device 610 may be disposed near an entrance of the target location. Depending on the specific location of the waiting area of the target place, the lens may face the outside of the target location to capture an environment image outside it, or face the inside to capture an environment image inside it, so long as the queuing people of the target place can be captured. One or more image capturing devices 610 may be positioned near the entrance of each target site.
In the example shown in fig. 2, the lens of the image capturing device 610 faces the outside of the target site, and, in order to capture the tail of the queue, the image capturing device 610 is disposed near the line at the front of block 201 (i.e., near the entrance).
In an optional implementation manner, as shown in fig. 7, the detection system provided in the embodiment of the present application further includes: a user terminal 630; the user terminal 630 is communicatively coupled to the server 620.
The user terminal 630 is configured to: and sending request information to the server 620 in response to the query operation for the target place, and receiving at least one target place to be recommended sent by the server 620.
When receiving a query operation for a target location, the user terminal 630 by default allows its own location information to be acquired by the server 620, and when sending request information to the server 620, it sends its own location information together with the request information.
Optionally, the user terminal 630 is further configured to: receiving route information of at least one route from the current location of the user terminal 630 to the current location of the target place, which is transmitted from the server 620.
Optionally, the user terminal 630 is further configured to: and displaying the received target place and route information.
The user can inquire the required target place and route information in real time through the user terminal 630, and select a suitable target place and route according to the self condition.
The number of the user terminals in the embodiment of the present application may be one or more.
The server 620 in the embodiment of the present application may include: the storage and the processor are electrically connected.
The storage stores a computer program which, when executed by the processor, implements any method for detecting the number of people at a target place or any resource recommendation method provided by the embodiments of the present application.
Those skilled in the art will appreciate that the servers provided in the embodiments of the present application may be specially designed and manufactured for the required purposes, or may comprise known devices in general-purpose computers. These devices store computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device- (e.g., computer-) readable medium, or in any type of medium suitable for storing electronic instructions and coupled to a bus.
The present application provides, in an alternative embodiment, a server, as shown in fig. 8, the server 620 comprising: memory 621 and processor 622, and memory 621 and processor 622 are electrically connected, such as by bus 623.
Optionally, the memory 621 is used for storing application program codes for implementing the present application, and the processor 622 controls the execution. The processor 622 is configured to execute the application program code stored in the memory 621 to implement any method for detecting the number of people at the target location or any method for recommending resources provided by the embodiment of the present application.
The memory 621 may be a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disk storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The processor 622 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 622 may also be a combination of computing devices, e.g., one or more microprocessors, or a DSP combined with a microprocessor, and the like.
Bus 623 may include a path that carries information between the aforementioned components. The bus may be a PCI (Peripheral Component Interconnect) bus or an EISA (Extended Industry Standard Architecture) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
Optionally, the server 620 may also include a transceiver 624. The transceiver 624 may be used for the reception and transmission of signals. The transceiver 624 may allow the server 620 to communicate with other devices, wireless or wired, to exchange data. It should be noted that the transceiver 624 is not limited to one in practical applications.
Optionally, the server 620 may further include an input unit 625. The input unit 625 may be used to receive input numeric, character, image and/or sound information or to generate key signal inputs related to user settings and function control of the server 620. The input unit 625 may include, but is not limited to, one or more of a touch screen, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, a camera, a microphone, and the like.
Optionally, the server 620 may further include an output unit 626. The output unit 626 may be used to output or show information processed by the processor 622. The output unit 626 may include, but is not limited to, one or more of a display device, a speaker, a vibration device, and the like.
While fig. 8 illustrates a server 620 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
Based on the same inventive concept, the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the method for detecting the number of people in any target place or any method for recommending resources, provided by the embodiment of the present application.
The computer readable medium includes, but is not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks, ROMs, RAMs, EPROMs (Erasable Programmable Read-Only Memory), EEPROMs, flash Memory, magnetic cards, or fiber optic cards. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
The embodiment of the application provides a computer-readable storage medium suitable for the detection method of the number of people in any target place or any resource recommendation method, which is not described herein again.
By applying the embodiment of the application, at least the following beneficial effects can be realized:
1) The method and the device can determine the target detection area, and the facial and non-facial human head target frames within it, based on the environment image of the target place, realizing accurate identification of the human head target frames in the target detection area. Based on the accurately identified numbers of facial and non-facial human head target frames, the numbers of people entering and leaving the target place in any time period can be accurately determined, providing more accurate reference data for users choosing a target place, so that a user can quickly make a choice that better suits them, reducing wasted selection time and resources.
2) When the numbers of entering and leaving persons in any time period are determined based on the numbers of facial and non-facial human head target frames, the moving direction of each human head target frame can be accurately identified according to whether the human head target frame overlaps the reference mark, the intersection ratio of the human head target frames in preceding and following frames, and their relative direction; the numbers of entering and leaving persons are then determined accordingly.
3) The current number of people queuing at the target place can be accurately determined based on the number of accurately identified facial human head target frames. When no facial human head target frame is identified, the current queuing number can be accurately estimated based on the number of people entering and leaving the target place in the preceding first time period, again providing accurate reference data so that the user can quickly make a suitable choice, reducing wasted selection time and resources.
4) When determining whether each human head target frame is a facial or non-facial human head target frame, the types of a first number of human head target frames adjacent to the current one can be consulted, so that the type assigned to the current human head target frame stays consistent with that of its neighbors. This avoids the misidentification that may occur when a human head target frame is identified in isolation and makes the identification result more accurate.
5) The recommendation score of each place can be determined based on the people-number information of the target place (the current queuing number and the number of people entering and leaving in the second time period), the place position information, and the user terminal position information, and suitable target places are recommended to the user terminal as references according to the score, so that the user can more quickly select a suitable target place in light of their own situation, which provides great convenience and saves the user's time.
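The neighbor-consistency rule summarized in point 4 above can be sketched as below. The unanimity requirement (all neighbors must share a type before it overrides the current classification) follows the four cases of claim 5; the function and argument names are illustrative.

```python
def refine_head_type(current_is_face, neighbor_is_face):
    """Neighbor-consistency refinement of a head target frame's type.

    current_is_face: face-recognition result for the current head target frame.
    neighbor_is_face: list of results for the first number of adjacent frames.
    If all neighbors agree on one type, that type is assigned; otherwise the
    current classification is kept.
    """
    if neighbor_is_face and all(neighbor_is_face):
        return True   # all neighbors facial -> treat current as facial
    if neighbor_is_face and not any(neighbor_is_face):
        return False  # all neighbors non-facial -> treat current as non-facial
    return current_is_face
```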
Those of skill in the art will appreciate that the various operations, methods, steps, measures, and schemes discussed in this application can be interchanged, modified, combined, or deleted. Other steps, measures, or schemes within the operations, methods, or flows discussed in this application can likewise be alternated, altered, rearranged, decomposed, combined, or deleted. Further, prior-art steps, measures, or schemes among the operations, methods, or flows disclosed in the present application may also be alternated, modified, rearranged, decomposed, combined, or deleted.
In the description of the present application, it is to be understood that the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose order of execution is not necessarily sequential; they may be executed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A method for detecting the number of people in a target place is characterized by comprising the following steps:
acquiring an environment image of a target place;
determining a target detection area in the environment image;
determining a facial human head target frame and a non-facial human head target frame in the target detection area;
and determining the number of people entering and exiting the target place in any time period according to the facial head target frame and the non-facial head target frame of each target detection area in the time period.
2. The method according to claim 1, wherein the determining the number of people entering and exiting the target location in any time period according to the target frame of the facial head and the target frame of the non-facial head of each target detection area in the time period comprises:
for each of the facial human head target frame and the non-facial human head target frame, determining whether the human head target frame simultaneously satisfies the following conditions within any time period: the human head target frame and a reference mark in the environment image are provided with a superposition part; the intersection ratio of the human head target frame of each frame and the human head target frame of the previous frame is greater than an intersection ratio threshold value; the human head target frame of each frame is in the appointed direction of the human head target frame of the previous frame;
if the head target frame meets the conditions in the time period, determining the moving direction of the head target frame as entering or leaving the target place;
and determining the number of the entering persons and the number of the leaving persons of the target place in the time period according to the number of the head target frames entering or leaving the target place in the time period.
3. The detection method according to claim 1, further comprising:
and determining the current queuing number of the target place according to the number of the face head target frames or the number of the people entering or exiting the target place in the first time period.
4. The detection method according to any one of claims 1 to 3, wherein the determining of the facial human head target frame and the non-facial human head target frame in the target detection region comprises:
cutting out a sub-environment image containing the target detection area from the environment image;
determining each human head target frame in the sub-environment image according to a human head detection algorithm;
determining a human head target frame positioned in the target detection area range from the human head target frames of the sub-environment images;
and recognizing a facial human head target frame and a non-facial human head target frame in each human head target frame in the target detection area according to a human face recognition algorithm.
5. The detection method according to claim 4, wherein the identifying of the facial head target frame and the non-facial head target frame in each head target frame in the target detection area according to a face recognition algorithm comprises:
for each human head target frame in the target detection area, identifying whether the current human head target frame and a first number of human head target frames adjacent to the current human head target frame are facial human head target frames or not according to the face recognition algorithm;
if the current human head target frame and a first number of human head target frames adjacent to the current human head target frame are both facial human head target frames, determining that the current human head target frame is a facial human head target frame;
if the current human head target frame is a facial human head target frame and the first number of human head target frames adjacent to the current human head target frame are all non-facial human head target frames, determining that the current human head target frame is a non-facial human head target frame;
if the current human head target frame and a first number of human head target frames adjacent to the current human head target frame are non-facial human head target frames, determining that the current human head target frame is a non-facial human head target frame;
and if the current human head target frame is a non-facial human head target frame and the first number of human head target frames adjacent to the current human head target frame are all facial human head target frames, determining that the current human head target frame is a facial human head target frame.
6. A resource recommendation method, comprising:
for a plurality of target places, acquiring the current queuing number of people, the number of people who enter and exit in a second time period, place position information and request information of a user terminal of each target place; the current queuing number is the current queuing number determined by the detection method of any one of claims 3-5; the number of people who pass in and out is determined by the detection method of any one of claims 1-5; the request information comprises user terminal position information;
determining a recommended score of each target place according to the current queuing number of people, the number of people who enter and exit in the second time period, the place position information and the user position information of each place;
and determining at least one target place to be recommended to send to the user terminal according to the recommendation score.
7. The resource recommendation method according to claim 6, wherein the determining a recommendation score for each of the target sites according to the current number of people in line for each of the sites, the number of people coming in and going out in the second time period, the site location information, and the user location information comprises:
determining the resource use speed of each target place according to the number of people who enter and exit each target place in the second time period;
determining the distance between the user terminal and each target place according to the position information of the user terminal and the position information of each target place;
and determining the recommendation score of each target place according to the current queuing number of people, the resource using speed and the distance between the user and each target place.
8. The resource recommendation method according to claim 6 or 7, further comprising:
and for each target place to be recommended, determining at least one route from the current position of the user terminal to the current position of the target place according to the place position information and the position information of the user terminal, and sending the route information of the at least one route to the user terminal.
9. A detection system, comprising: an image acquisition device and a server;
the image acquisition device is arranged at a target place and is in communication connection with the server;
the image acquisition device is used for: acquiring an environment image of a target place, and sending the environment image to the server;
the server is used for executing the method for detecting the number of people entering or leaving the target place according to any one of claims 1-5.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the method for detecting the number of people coming in and out of a target place according to any one of claims 1 to 5 or the method for recommending resources according to any one of claims 6 to 8.
CN202010923053.6A 2020-09-04 2020-09-04 Method for detecting number of people in target place, recommendation method, detection system and medium Pending CN112036345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010923053.6A CN112036345A (en) 2020-09-04 2020-09-04 Method for detecting number of people in target place, recommendation method, detection system and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010923053.6A CN112036345A (en) 2020-09-04 2020-09-04 Method for detecting number of people in target place, recommendation method, detection system and medium

Publications (1)

Publication Number Publication Date
CN112036345A true CN112036345A (en) 2020-12-04

Family

ID=73590738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010923053.6A Pending CN112036345A (en) 2020-09-04 2020-09-04 Method for detecting number of people in target place, recommendation method, detection system and medium

Country Status (1)

Country Link
CN (1) CN112036345A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112587035A * 2020-12-08 2021-04-02 珠海市一微半导体有限公司 Control method and system for mobile robot to recognize working scene
CN113269065A * 2021-05-14 2021-08-17 深圳印像数据科技有限公司 Method for counting people flow in front of screen based on target detection algorithm
CN113269065B * 2021-05-14 2023-02-28 深圳印像数据科技有限公司 Method for counting people flow in front of screen based on target detection algorithm
CN113192048A * 2021-05-17 2021-07-30 广州市勤思网络科技有限公司 Multi-mode fused people number identification and statistics method
CN113570626A * 2021-09-27 2021-10-29 腾讯科技(深圳)有限公司 Image cropping method and device, computer equipment and storage medium
CN113995316A * 2021-09-28 2022-02-01 珠海格力电器股份有限公司 Method and device for determining cooking amount, storage medium and electronic equipment
CN117789342A * 2024-02-28 2024-03-29 南方电网调峰调频发电有限公司工程建设管理分公司 Method and device for counting tunnel object access

Similar Documents

Publication Publication Date Title
CN112036345A (en) Method for detecting number of people in target place, recommendation method, detection system and medium
EP3779360A1 (en) Indoor positioning method, indoor positioning system, indoor positioning device, and computer readable medium
CN105956518A (en) Face identification method, device and system
CN111160243A (en) Passenger flow volume statistical method and related product
CN110874583A (en) Passenger flow statistics method and device, storage medium and electronic equipment
US10948309B2 (en) Navigation method, shopping cart and navigation system
CN102332091A (en) Camera head and control method thereof, shooting back-up system and individual evaluation method
CN106454277A (en) Image analysis method and device for video monitoring
US20210272314A1 (en) Queuing recommendation method and device, terminal and computer readable storage medium
CN110298268B (en) Method and device for identifying bidirectional passenger flow through single lens, storage medium and camera
AU2020309094B2 (en) Image processing method and apparatus, electronic device, and storage medium
CN112215084A (en) Identification object determination method, device, equipment and storage medium
CN113677409A (en) Treasure hunting game guiding technology
CN111586367A (en) Method, system and terminal equipment for positioning and tracking personnel in space area in real time
CN109753883A (en) Video locating method, device, storage medium and electronic equipment
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
CN111078751A (en) Method and system for carrying out target statistics based on UNREAL4
CN111753611A (en) Image detection method, device and system, electronic equipment and storage medium
CN112631333B (en) Target tracking method and device of unmanned aerial vehicle and image processing chip
CN112385180A (en) System and method for matching identity and readily available personal identifier information based on transaction time stamp
CN116033544A (en) Indoor parking lot positioning method, computer device, storage medium and program product
CN109714521B (en) Conference site on-site registration platform
KR101598041B1 (en) Traffic line device using camera
CN113313062A (en) Path acquisition method, device, system, electronic equipment and storage medium
RU2712417C1 (en) Method and system for recognizing faces and constructing a route using augmented reality tool

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination