CN113792674B - Method and device for determining empty rate and electronic equipment - Google Patents
- Publication number
- CN113792674B (application CN202111090916.7A, filed as CN202111090916A)
- Authority
- CN
- China
- Prior art keywords
- coordinate
- seat
- coordinates
- determining
- ground
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06N—Computing arrangements based on specific computational models; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/04—Architecture, e.g. interconnection topology; G06N3/045—Combinations of networks
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06N—Computing arrangements based on specific computational models; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/08—Learning methods
Abstract
In this method for determining the empty-seat rate, after the server acquires an image of the target place captured by a camera, it detects the image to obtain the seats and human bodies in it. The server then determines the first pixel coordinates of the seats and the second pixel coordinates of the human bodies, converts the first pixel coordinates into first coordinates in a ground plane coordinate system and the second pixel coordinates into second coordinates in that system, determines the current number of seats in the target place from the first coordinates and the current number of users from the second coordinates, and finally determines the current empty-seat rate of the target place from those two counts. The real-time empty-seat rate of the target place is thus determined from images captured by a camera already installed there, which in turn allows the service efficiency of the target place to be optimized.
Description
[Technical Field]
The embodiments of this specification relate to the field of Internet technology, and in particular to a method and device for determining the empty-seat rate, and to electronic equipment.
[Background Art]
In the current catering industry, users generally do not know the exact number of diners or the peak passenger-flow periods of nearby shops. They can only roughly estimate how busy each shop is from past experience, then, after arriving, weigh that estimate against their current schedule and/or taste preferences and the actual situation on site, and finally pick a reasonably satisfactory shop to eat in. It is therefore inevitable that a user either waits a long time after arriving at a shop or gives up on their first-choice shop.
To spare users a long wait or a change of preferred shop after arrival, the real-time empty-seat rates and peak passenger-flow periods of nearby shops can be ranked according to user preferences and shown to users as real-time map data. On the one hand, the real-time empty-seat rate helps a user choose a suitable shop according to their own needs, forming a data-driven decision before arriving at the shop rather than a fuzzy decision based on past experience; on the other hand, a shop can use its real-time empty-seat rate to optimize service efficiency, for example by combining it with take-out offers and/or coupons to spread demand beyond the peak period.
Accordingly, it is desirable to provide a method of determining the real-time empty-seat rate of a store.
[Summary of the Invention]
The embodiments of this specification provide a method and device for determining the empty-seat rate, and electronic equipment, so that the real-time empty-seat rate of a target place can be determined from images captured by a camera installed there, which in turn allows the service efficiency of the target place to be optimized.
In a first aspect, an embodiment of the present disclosure provides a method for determining an empty-seat rate, applied to a server, the method including: acquiring an image of a target place captured by a camera, the camera being installed in the target place; detecting the image to obtain the seats and human bodies in the image; determining first pixel coordinates of the contact points between the seats and the ground, and second pixel coordinates of the contact points between the feet of the human bodies and the ground; determining, from the first pixel coordinates, first coordinates of the seat-ground contact points in a ground plane coordinate system, and, from the second pixel coordinates, second coordinates of the foot-ground contact points in that system; determining the current number of seats in the target place from the first coordinates, and the current number of users from the second coordinates; and determining the current empty-seat rate of the target place from the number of seats and the number of users.
In this method for determining the empty-seat rate, after acquiring an image of the target place captured by a camera, the server detects the image to obtain the seats and human bodies in it. It then determines the first pixel coordinates of the contact points between the seats and the ground and the second pixel coordinates of the contact points between the feet of the human bodies and the ground; from the first pixel coordinates it determines the first coordinates of the seat-ground contact points in a ground plane coordinate system, and from the second pixel coordinates the second coordinates of the foot-ground contact points in that system. Finally, it determines the current number of seats in the target place from the first coordinates, the current number of users from the second coordinates, and the current empty-seat rate of the target place from those two counts. The real-time empty-seat rate of the target place can thus be determined from images captured by a camera installed there, which in turn allows the service efficiency of the target place to be optimized. In addition, because the method reuses cameras already present in the target place, no passenger-flow counter needs to be installed at the entrance, which lowers the installation and power-supply effort and makes the method widely applicable.
In one possible implementation, determining the first coordinates from the first pixel coordinates and the second coordinates from the second pixel coordinates includes: extracting the coordinates of the orthogonal vanishing points in the image by detecting straight line segments in it; computing the focal length and the external parameter matrix of the camera from the coordinates of the orthogonal vanishing points; obtaining the transformation matrix between the image and the camera coordinate system from the external parameter matrix and the height of the camera above the ground; and performing three-dimensional semantic reconstruction from the first and second pixel coordinates together with the camera focal length and the transformation matrix, so as to determine the first and second coordinates.
In one possible implementation, there are at least two cameras; after acquiring the images of the target place captured by the cameras, the method further includes: extracting feature points from the images captured by the at least two cameras; matching the extracted feature points to obtain feature point matching pairs; and determining the transformation relationship between the at least two cameras from the matching pairs, so as to stitch the images captured by the at least two cameras into one scene.
In one possible implementation, after determining the first and second coordinates, the method further includes: when at least two first coordinates are detected within the region where a seat is located, performing non-maximum suppression on them and retaining the first coordinate with the highest confidence within that region as the coordinate of the seat-ground contact point in the ground plane coordinate system; and/or, when at least two second coordinates are detected within the region where a human body is located, performing non-maximum suppression on them and retaining the second coordinate with the highest confidence within that region as the coordinate of the foot-ground contact point in the ground plane coordinate system.
In one possible implementation, the first coordinates include at least two coordinates of a first seat obtained by detecting that seat in N consecutive frames before the current moment; determining the first coordinates from the first pixel coordinates then further includes: performing non-maximum suppression on the at least two coordinates of the first seat and retaining the one with the highest confidence as the coordinate of the contact point between the first seat and the ground in the ground plane coordinate system.
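The point-level non-maximum suppression described above can be sketched as follows. This is an illustrative sketch rather than the patented implementation: it assumes each detection is an (x, y, confidence) ground-plane point and uses a hypothetical distance threshold `radius` to delimit "the region where the seat (or human body) is located".

```python
def nms_points(detections, radius):
    """Among detections closer together than `radius`, keep only the
    highest-confidence one.

    detections: list of (x, y, confidence) ground-plane points.
    Returns the retained points, highest confidence first.
    """
    kept = []
    # Visit candidates from highest to lowest confidence; a candidate is
    # kept only if it is not within `radius` of an already-kept point.
    for x, y, c in sorted(detections, key=lambda d: -d[2]):
        if all((x - kx) ** 2 + (y - ky) ** 2 >= radius ** 2
               for kx, ky, _ in kept):
            kept.append((x, y, c))
    return kept
```

For example, two detections of the same seat 0.05 m apart collapse into the single higher-confidence point, while a detection of a different seat 2 m away survives.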
In a second aspect, an embodiment of the present disclosure provides a device for determining an empty-seat rate, including: an acquisition module for acquiring an image of a target place captured by a camera, the camera being installed in the target place; a detection module for detecting the image and obtaining the seats and human bodies in it; a coordinate determining module for determining first pixel coordinates of the contact points between the seats and the ground and second pixel coordinates of the contact points between the feet of the human bodies and the ground, and for determining, from the first pixel coordinates, first coordinates of the seat-ground contact points in a ground plane coordinate system and, from the second pixel coordinates, second coordinates of the foot-ground contact points in that system; and an empty-seat rate determining module for determining the current number of seats in the target place from the first coordinates and the current number of users from the second coordinates, and for determining the current empty-seat rate of the target place from the number of seats and the number of users.
In one possible implementation, the coordinate determining module includes: a coordinate extraction submodule for extracting the coordinates of the orthogonal vanishing points in the image by detecting straight line segments in it; a computing submodule for computing the focal length and the external parameter matrix of the camera from the coordinates of the orthogonal vanishing points, and for obtaining the transformation matrix between the image and the camera coordinate system from the external parameter matrix and the height of the camera above the ground; and a reconstruction submodule for performing three-dimensional semantic reconstruction from the first and second pixel coordinates together with the camera focal length and the transformation matrix, so as to determine the first and second coordinates.
In one possible implementation, the device further includes: an extraction module for extracting, when there are at least two cameras, feature points from the images captured by the at least two cameras after the acquisition module acquires them; a matching module for matching the feature points extracted by the extraction module to obtain feature point matching pairs; and a scene stitching module for determining the transformation relationship between the at least two cameras from the matching pairs, so as to stitch the images captured by the at least two cameras into one scene.
In one possible implementation, the device further includes a retention module for: when at least two first coordinates are detected within the region where a seat is located, performing non-maximum suppression on them and retaining the first coordinate with the highest confidence within that region as the coordinate of the seat-ground contact point in the ground plane coordinate system; and/or, when at least two second coordinates are detected within the region where a human body is located, performing non-maximum suppression on them and retaining the second coordinate with the highest confidence within that region as the coordinate of the foot-ground contact point in the ground plane coordinate system.
In one possible implementation, the first coordinates include at least two coordinates of a first seat obtained by detecting that seat in N consecutive frames before the current moment; the device then further includes a retention module for performing, after the coordinate determining module determines the first coordinates, non-maximum suppression on the at least two coordinates of the first seat and retaining the one with the highest confidence as the coordinate of the contact point between the first seat and the ground in the ground plane coordinate system.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and at least one memory communicatively coupled to the processor, wherein the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the method provided in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method provided in the first aspect.
It should be understood that the second to fourth aspects of the embodiments of the present disclosure are consistent with the technical solution of the first aspect; the beneficial effects of each aspect and its possible implementations are similar and are not repeated here.
[Description of the Drawings]
To illustrate the technical solutions of the embodiments of this specification more clearly, the drawings needed in the embodiments are briefly described below. The drawings described below clearly show only some embodiments of this specification; a person of ordinary skill in the art can obtain further drawings from them without inventive effort.
FIG. 1 is a flowchart of a method for determining the empty-seat rate according to one embodiment of the present disclosure;
FIG. 2 is a schematic illustration of an image within a target place provided in one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of detected seats and human bodies provided in one embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for determining the empty-seat rate according to another embodiment of the present disclosure;
FIG. 5 is a schematic illustration of image processing provided in one embodiment of the present disclosure;
FIG. 6 is a flowchart of a method for determining the empty-seat rate according to still another embodiment of the present disclosure;
FIG. 7 is a flowchart of a method for determining the empty-seat rate according to yet another embodiment of the present disclosure;
FIG. 8 is a schematic diagram of image processing provided in another embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a device for determining the empty-seat rate according to an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a device for determining the empty-seat rate according to another embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
[Detailed Description of the Invention]
For a better understanding of the technical solutions of the present specification, embodiments of the present specification are described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the present disclosure.
The terminology used in the embodiments of the description presented herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description presented herein. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to solve the problem that users wait for a long time or replace a preferred store after arriving at the store, the real-time empty rates and the peak periods of passenger flow of each store can be sorted according to user preferences and displayed to the users in the form of real-time map data.
Two methods of counting the empty-seat rate exist in the prior art:
1) Seat code scanning: the current number of diners in the store is counted in real time from code-scanning orders, and the current in-store empty-seat rate is obtained by combining it with a manually pre-counted total seat number. In this scheme the actual number of diners is entered by the store's service staff, so accuracy is heavily affected by human factors and the staff's workload increases;
2) Passenger-flow counting: a passenger-flow counter such as an infrared scanner and/or a camera counts passenger flow at the storefront entrance, again combined with a manually pre-counted total seat number to obtain the current in-store empty-seat rate. This scheme suffers from several people entering/exiting at once, multiple entrances and/or occlusion, which limits the accuracy of the head count; moreover, a passenger-flow counter placed at the entrance imposes comparatively high requirements on equipment installation and/or power supply.
The embodiments of this specification provide a method for determining the empty-seat rate that completes automatic calibration of the camera pose by detecting vanishing points in images captured by a camera (for example, a surveillance camera), thereby enabling three-dimensional semantic reconstruction of the seats and human bodies. Based on the indoor layout, non-maximum suppression and point aggregation are applied to the seat and human-body detections, which improves detection precision and in turn the precision of the real-time empty-seat rate.
In this method, images can be captured by a surveillance camera already installed in the target place; from these images the camera pose is calibrated automatically under the Manhattan-world assumption, and a deep-learning detection algorithm is then combined with inverse perspective transformation to reconstruct the indoor semantic information in three dimensions and estimate the in-store empty-seat rate. No manual registration is needed and no passenger-flow counter has to be installed at the entrance, so the method has clear efficiency and deployment advantages over the existing schemes.
FIG. 1 is a flowchart of a method for determining the empty-seat rate according to an embodiment of the present disclosure. The method may be applied to a server and, as shown in FIG. 1, may include:
Step 102, acquiring an image in a target place acquired by a camera; wherein the camera is arranged in the target place.
Specifically, the camera may be a monitoring camera disposed in the target location, so that the server may directly obtain an image collected by the monitoring camera in the target location, where the collected image may be as shown in fig. 2, and fig. 2 is a schematic diagram of the image in the target location provided in one embodiment of the present disclosure.
Step 104, detecting the image to acquire the seat and the human body in the image.
Specifically, the seats in the image, for example chairs or sofas, may be detected by a deep-learning method. Deep-learning methods suitable for seat detection include: YOLO, single-shot multibox detection (SSD), the faster region convolutional neural network (Faster RCNN), and/or the mask region convolutional neural network (Mask RCNN), among others.
Likewise, the human bodies in the image may be detected by a deep-learning method; suitable methods again include YOLO, SSD, Faster RCNN, and/or Mask RCNN, etc.
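As a minimal illustration of this detection step, the sketch below assumes a generic object detector (YOLO, SSD, Faster/Mask RCNN, ...) has already produced (label, confidence, bounding-box) tuples; the label names and the confidence threshold are assumptions made for the example, not part of the patent.

```python
# Assumed label set for what counts as a "seat"; a real deployment would
# match the class vocabulary of the chosen detector.
SEAT_LABELS = {"chair", "sofa", "bench"}

def split_detections(raw, min_conf=0.5):
    """Split raw detector output into seat boxes and person boxes.

    raw: iterable of (label, confidence, box) tuples as produced by a
    generic object detector.
    """
    seats, people = [], []
    for label, conf, box in raw:
        if conf < min_conf:
            continue  # drop low-confidence detections
        if label in SEAT_LABELS:
            seats.append(box)
        elif label == "person":
            people.append(box)
    return seats, people
```

The two returned lists correspond to the seat and human-body detections that the later steps convert into ground-plane coordinates.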
In specific implementation, taking fig. 2 as an example, after the seat and the human body in the image shown in fig. 2 are detected, an image shown in fig. 3 may be obtained, and fig. 3 is a schematic diagram of the detected seat and human body provided in one embodiment of the present disclosure.
Step 106, determining the first pixel coordinates of the contact point of the seat and the ground, and determining the second pixel coordinates of the contact point of the foot of the human body and the ground.
Step 108, determining a first coordinate of the contact point of the seat and the ground in a ground plane coordinate system according to the first pixel coordinate, and determining a second coordinate of the contact point of the foot of the human body and the ground in the ground plane coordinate system according to the second pixel coordinate.
Specifically, the server may determine, according to the first pixel coordinates, the first coordinates of the contact point between the seat and the ground in the ground plane coordinate system through inverse perspective transformation, and determine, according to the second pixel coordinates, the second coordinates of the contact point between the foot of the human body and the ground in the ground plane coordinate system through inverse perspective transformation.
Step 110, determining the current number of seats in the target place according to the first coordinates, and determining the current number of users in the target place according to the second coordinates.
Step 112, determining the current empty-seat rate of the target place according to the number of seats and the number of users.
Specifically, subtracting the number of users from the number of seats gives the current number of empty seats in the target place, and dividing the number of empty seats by the number of seats gives the current empty-seat rate of the target place.
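This arithmetic can be written as a short sketch; the clamp to zero (for frames where more people than seats are detected) is an assumption added for robustness, not stated in the patent.

```python
def empty_seat_rate(num_seats, num_users):
    """Current empty-seat rate: empty seats divided by total seats."""
    if num_seats <= 0:
        raise ValueError("seat count must be positive")
    # Empty seats = seats - users, clamped at zero in case the person
    # count momentarily exceeds the seat count.
    empty = max(num_seats - num_users, 0)
    return empty / num_seats
```

For example, 20 detected seats and 15 detected users give an empty-seat rate of 0.25.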
In this method for determining the empty-seat rate, after acquiring an image of the target place captured by a camera, the server detects the image to obtain the seats and human bodies in it. It then determines the first pixel coordinates of the contact points between the seats and the ground and the second pixel coordinates of the contact points between the feet of the human bodies and the ground; from the first pixel coordinates it determines the first coordinates of the seat-ground contact points in a ground plane coordinate system, and from the second pixel coordinates the second coordinates of the foot-ground contact points in that system. Finally, it determines the current number of seats in the target place from the first coordinates, the current number of users from the second coordinates, and the current empty-seat rate of the target place from those two counts. The real-time empty-seat rate of the target place can thus be determined from images captured by a camera installed there, which in turn allows the service efficiency of the target place to be optimized. In addition, because the method reuses cameras already present in the target place, no passenger-flow counter needs to be installed at the entrance, which lowers the installation and power-supply effort and makes the method widely applicable.
FIG. 4 is a flowchart of a method for determining the empty-seat rate according to another embodiment of the present disclosure. As shown in FIG. 4, in the embodiment of FIG. 1, step 108 may include:
step 402, extracting coordinates of an orthogonal vanishing point in the image by detecting a straight line segment in the image.
Step 404, calculating the focal length of the camera and the external parameter matrix according to the coordinates of the orthogonal vanishing points.
Specifically, by detecting straight line segments in the image, the coordinates of the three orthogonal vanishing points (u1, v1), (u2, v2), (u3, v3) can be extracted, and the camera focal length and the external parameter matrix can then be solved from these vanishing point coordinates.
The camera focal length f can be computed from the orthogonality of the vanishing directions, as shown in formula (1):
f = sqrt( -[(u1 - u0)(u2 - u0) + (v1 - v0)(v2 - v0)] )   (1)
The external parameter matrix R can then be computed as shown in formulas (2) and (3), where each column of R0 is the back-projected direction of one vanishing point:
R0 = [ [u1 - u0, u2 - u0, u3 - u0], [v1 - v0, v2 - v0, v3 - v0], [f, f, f] ]   (2)
R = R0 / ‖R0‖   (each column of R0 normalized to unit length)   (3)
In formulas (1) to (3), (u0, v0) are the pixel coordinates of the center point of the image, i.e. half the image width and half the image height respectively.
Step 406, obtaining the transformation matrix between the image and the camera coordinate system according to the external parameter matrix and the height of the camera above the ground.
Specifically, from the external parameter matrix obtained in step 404 and the height h of the camera above the ground (indoors, h is typically between 2.5 m and 3.5 m), the transformation matrix [R|t] between the image and the camera coordinate system can be obtained, as shown in formula (4):
t = -R·[0, 0, h]^T   (4)
where the world origin is taken to lie on the ground directly below the camera.
Step 408, performing three-dimensional semantic reconstruction according to the first pixel coordinates and the second pixel coordinates, together with the focal length of the camera and the transformation matrix, to determine the first coordinates and the second coordinates.
Specifically, after the focal length of the camera and the change matrix are obtained, the first pixel coordinate and the second pixel coordinate are combined, so that three-dimensional semantic reconstruction can be performed.
For example, assume that the pixel coordinate (the first pixel coordinate or the second pixel coordinate) of a detected object is (u, v) and that its ground plane coordinate is P^W = [x^W, y^W, 0]^T, where the superscript W denotes the ground plane coordinate system; the calculation process of the three-dimensional semantic reconstruction may then be as shown in formula (5).
In formula (5), (u, v) is the pixel coordinate of the detected object in the image, P^C denotes the corresponding three-dimensional coordinate in the camera coordinate system, and s denotes a scale factor.
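A minimal sketch of the back-projection behind formula (5), under the assumed convention P^C = R·P^W + t: the pixel ray is rotated into the world frame and intersected with the ground plane z^W = 0. The function name and matrix layout are illustrative:

```python
def pixel_to_ground(u, v, f, u0, v0, R, t):
    """Back-project pixel (u, v) onto the ground plane z^W = 0.

    Assumes P^C = R @ P^W + t (world -> camera), a pinhole camera with
    focal length f and principal point (u0, v0); R is a 3x3 nested list,
    t a length-3 list.
    """
    # Ray direction in camera coordinates: K^{-1} [u, v, 1]^T.
    d_cam = [(u - u0) / f, (v - v0) / f, 1.0]
    # Rotate the ray into world coordinates: d^W = R^T d_cam.
    d_w = [sum(R[k][i] * d_cam[k] for k in range(3)) for i in range(3)]
    # Camera centre in world coordinates: C = -R^T t.
    c_w = [-sum(R[k][i] * t[k] for k in range(3)) for i in range(3)]
    # Intersect the ray C + s * d^W with the plane z = 0; the scale s
    # plays the role of the factor s in formula (5).
    s = -c_w[2] / d_w[2]
    return [c_w[i] + s * d_w[i] for i in range(3)]

# Synthetic example: camera 3 m above the ground, looking straight down.
R_down = [[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]]
ground_pt = pixel_to_ground(1120, 240, 800.0, 320.0, 240.0, R_down, [0.0, 0.0, 3.0])
```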
The present embodiment describes the processes of camera self-calibration and inverse perspective transformation, and schematic diagrams of image processing in the two processes may be shown in fig. 5, and fig. 5 is a schematic diagram of image processing provided in one embodiment of the present specification.
Fig. 6 is a flowchart of a method for determining the empty rate according to still another embodiment of the present disclosure. In this embodiment, the number of cameras may be at least two; thus, as shown in fig. 6, in the embodiment shown in fig. 1 of the present specification, after step 102, the method may further include:
And step 602, extracting feature points from the images acquired by the at least two cameras.
Step 604, matching the extracted feature points to obtain feature point matching pairs.
Step 606, determining a transformation relationship between at least two cameras according to the feature point matching pairs, so as to perform scene stitching on the images acquired by the at least two cameras.
Specifically, the feature points may be scale-invariant feature transform (SIFT) feature points. SIFT feature points are extracted from the images acquired by the at least two cameras and matched to obtain multiple sets of two-dimensional (2D) feature point matching pairs, and the transformation relationship between the at least two cameras is solved from these pairs through the epipolar constraint, so as to realize scene stitching of the multiple images.
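The epipolar constraint used in step 606 states that, for a true match between normalized image points x1 and x2 of two cameras related by an essential matrix E, x2^T·E·x1 = 0; the relative rotation and translation are recovered by solving for the E that satisfies this over many matching pairs. An illustrative check of the constraint for a pure-translation camera pair (the setup is synthetic, not from the source):

```python
def skew(t):
    """Cross-product (skew-symmetric) matrix [t]_x, so that [t]_x @ v == t x v."""
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

def epipolar_residual(x1, x2, E):
    """x2^T E x1; zero (up to noise) when x1 and x2 are a true match."""
    Ex1 = [sum(E[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Ex1[i] for i in range(3))

# Pure-translation pair: camera 2 sees P2 = P1 + t, so E = [t]_x (R = I).
t = (-1.0, 0.0, 0.0)
E = skew(t)
P1 = (2.0, 1.0, 4.0)                        # 3D point in camera-1 coordinates
P2 = tuple(a + b for a, b in zip(P1, t))    # same point in camera-2 coordinates
x1 = [P1[0] / P1[2], P1[1] / P1[2], 1.0]    # normalized image coordinates
x2 = [P2[0] / P2[2], P2[1] / P2[2], 1.0]
```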
Fig. 7 is a flowchart of a method for determining the empty rate according to still another embodiment of the present disclosure. As shown in fig. 7, in the embodiment of fig. 1 of the present disclosure, after step 108, the method may further include:
step 702, when at least two first coordinates are detected in the area where the seat is located, performing non-maximum suppression processing on the at least two first coordinates and keeping the first coordinate with the highest confidence within the area where the seat is located as the coordinate of the contact point between the seat and the ground in a ground plane coordinate system; and/or, when at least two second coordinates are detected in the area where the human body is located, performing non-maximum suppression processing on the at least two second coordinates and keeping the second coordinate with the highest confidence within the area where the human body is located as the coordinate of the contact point between the foot of the human body and the ground in the ground plane coordinate system.
That is, in a specific implementation, because the seat and human body detectors may produce false detections in which several seats or human bodies are stacked together, non-maximum suppression is performed on the reconstructed three-dimensional coordinates of each detection target, so that only the detection with the highest confidence within the range is kept for each type of target. The image processing at this stage may be as shown in fig. 8, and fig. 8 is a schematic diagram of image processing provided in another embodiment of the present specification.
Here, non-maximum suppression (NMS, including variants such as Soft-NMS) is an algorithm for removing non-maximum values that is commonly used in detection and/or recognition algorithms in computer vision.
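A greedy non-maximum suppression over reconstructed ground-plane coordinates can be sketched as follows: detections are visited in order of decreasing confidence, and any detection within a suppression radius of an already-kept one is discarded (the radius value and the (x, y, confidence) tuple layout are illustrative assumptions):

```python
def nms_ground(detections, radius):
    """Greedy NMS on ground-plane detections given as (x, y, confidence).

    Returns the kept detections, highest confidence first; any detection
    closer than `radius` to an already-kept one is suppressed.
    """
    kept = []
    for x, y, conf in sorted(detections, key=lambda d: -d[2]):
        if all((x - kx) ** 2 + (y - ky) ** 2 > radius ** 2
               for kx, ky, _ in kept):
            kept.append((x, y, conf))
    return kept

# Two overlapping seat detections 0.1 m apart, plus one distinct seat:
kept = nms_ground([(0.0, 0.0, 0.9), (0.1, 0.0, 0.8), (2.0, 0.0, 0.7)], radius=0.5)
```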
In addition, in the embodiment shown in fig. 1 of the present specification, the first coordinates may include at least two coordinates of a first seat obtained by detecting the first seat in N consecutive frames of images before the current time. In this case, after determining the first coordinate of the contact point between the seat and the ground in the ground plane coordinate system according to the first pixel coordinate, the server may further perform non-maximum suppression processing on the at least two coordinates of the first seat and keep the coordinate with the highest confidence among them as the coordinate of the contact point between the first seat and the ground in the ground plane coordinate system. The size of N is not limited in this embodiment; N may be 10, for example.
That is, in a specific implementation, because seat detection may miss a seat in some frames, the detection results for the same point in consecutive frames need to be aggregated. Specifically: all detected targets that appeared in the last 10 frames are retained; if the same target appears repeatedly, non-maximum suppression is performed on its coordinates over the last 10 frames and the coordinate with the highest confidence is kept.
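The aggregation over the last 10 frames can be sketched by pooling the per-frame detections in a bounded history and keeping, per spatial cluster, the most confident coordinate (the `deque` buffer and the merge radius are illustrative assumptions, not from the source):

```python
from collections import deque

def aggregate_recent(history, radius=0.5):
    """Merge (x, y, confidence) detections from the buffered frames,
    keeping one coordinate per target: the most confident detection,
    with others within `radius` of it suppressed."""
    merged = sorted((d for frame in history for d in frame),
                    key=lambda d: -d[2])
    kept = []
    for x, y, conf in merged:
        if all((x - kx) ** 2 + (y - ky) ** 2 > radius ** 2
               for kx, ky, _ in kept):
            kept.append((x, y, conf))
    return kept

history = deque(maxlen=10)          # keep only the last 10 frames
history.append([(1.0, 2.0, 0.7)])   # frame 1: seat detected
history.append([])                  # frame 2: the same seat is missed
history.append([(1.05, 2.0, 0.9)])  # frame 3: re-detected with higher confidence
```

A seat missed in a few frames is still counted as long as it appeared within the buffered window.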
In the method for determining the empty rate provided by the embodiments of this specification, the seats and human bodies in the image are first detected using a deep learning algorithm; three-dimensional reconstruction of the seats and human bodies is then achieved through camera self-calibration, inverse perspective transformation and scene stitching; finally, the accuracy of the empty rate is improved through post-processing algorithms. On the one hand, no manual participation is needed in the whole process, which improves the service efficiency of the target place; on the other hand, the existing monitoring cameras are used as the sensing component, so no passenger flow meter needs to be installed at the entrance, which reduces the construction difficulty of installation and power supply and facilitates large-scale deployment.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Fig. 9 is a schematic structural diagram of a device for determining an empty rate according to an embodiment of the present disclosure, and as shown in fig. 9, the device for determining an empty rate may include: an acquisition module 91, a detection module 92, a coordinate determination module 93, and an empty rate determination module 94;
the acquiring module 91 is configured to acquire an image in a target location acquired by the camera; wherein the camera is arranged in the target place;
the detection module 92 is configured to detect the image, and obtain a seat and a human body in the image;
a coordinate determining module 93 for determining a first pixel coordinate of a contact point of the seat and the ground, and determining a second pixel coordinate of a contact point of the foot of the human body and the ground; determining a first coordinate of the contact point of the seat and the ground in a ground plane coordinate system according to the first pixel coordinate, and determining a second coordinate of the contact point of the foot of the human body and the ground in the ground plane coordinate system according to the second pixel coordinate;
an empty rate determining module 94, configured to determine a current number of seats in the target location according to the first coordinates, and determine a current number of users in the target location according to the second coordinates; and determine a current empty rate of the target location according to the number of seats and the number of users.
The device for determining the empty rate provided by the embodiment shown in fig. 9 may be used to implement the technical solution of the method embodiment shown in fig. 1 in this specification, and the implementation principle and technical effects may be further described with reference to the related descriptions in the method embodiment.
Fig. 10 is a schematic structural diagram of a device for determining the empty rate according to another embodiment of the present disclosure. Compared with the device shown in fig. 9, the coordinate determining module 93 in the device shown in fig. 10 may include: a coordinate extraction sub-module 931, a calculation sub-module 932, and a reconstruction sub-module 933;
wherein, the coordinate extraction submodule 931 is configured to extract coordinates of an orthogonal vanishing point in the image by detecting a straight line segment in the image;
a calculating sub-module 932 for calculating a focal length and an external parameter matrix of the camera according to coordinates of the orthogonal vanishing points; obtaining a change matrix of the image and the camera coordinate system according to the external parameter matrix and the height of the camera from the ground;
and the reconstruction sub-module 933 is configured to perform three-dimensional semantic reconstruction according to the first pixel coordinate and the second pixel coordinate, and the focal length of the camera and the change matrix, so as to determine the first coordinate and the second coordinate.
Further, the above-mentioned determination device of the empty rate may further include: the extraction module 95, the matching module 96 and the scene splicing module 97;
wherein, when the number of cameras is at least two, the extracting module 95 is configured to extract feature points from the images acquired by the at least two cameras after the acquiring module 91 acquires the images in the target location acquired by the cameras;
a matching module 96, configured to match the feature points extracted by the extracting module 95 to obtain feature point matching pairs;
the scene stitching module 97 is configured to determine a transformation relationship between at least two cameras according to the feature point matching pair, so as to stitch the images acquired by the at least two cameras.
Further, the above-mentioned determination device of the empty rate may further include: a retention module 98;
the reservation module 98 is configured to perform non-maximum suppression processing on at least two first coordinates when at least two first coordinates are detected in the area where the seat is located, and keep the first coordinate with the highest confidence within the area where the seat is located as the coordinate of the contact point between the seat and the ground in a ground plane coordinate system; and/or, when at least two second coordinates are detected in the area where the human body is located, perform non-maximum suppression processing on the at least two second coordinates and keep the second coordinate with the highest confidence within the area where the human body is located as the coordinate of the contact point between the foot of the human body and the ground in the ground plane coordinate system.
In this embodiment, the first coordinates may include at least two coordinates of a first seat obtained by detecting the first seat in N consecutive frames of images before the current time; further, the above-mentioned determination device of the empty rate may further include: a retention module 98;
the retaining module 98 is configured to perform non-maximum suppression processing on at least two coordinates of the first seat after the coordinate determining module 93 determines, according to the first pixel coordinates, a first coordinate of the contact point between the first seat and the ground in the ground plane coordinate system, and retain a coordinate with the highest confidence level among the at least two coordinates of the first seat as a coordinate of the contact point between the first seat and the ground in the ground plane coordinate system.
The device for determining the empty rate provided by the embodiment shown in fig. 10 may be used to implement the technical solutions of the method embodiments shown in fig. 1 to 8 of the present application, and the implementation principle and technical effects may be further described with reference to the related descriptions in the method embodiments.
FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, where the electronic device may include at least one processor as shown in FIG. 11; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the method for determining the empty rate provided in the embodiments shown in fig. 1 to 8 of the present specification.
The electronic device may be a server, and the server may be disposed in the cloud, which is not limited in the form of the electronic device in this embodiment.
Fig. 11 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present description. The electronic device shown in fig. 11 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present description.
As shown in fig. 11, the electronic device is in the form of a general purpose computing device. Components of the electronic device may include, but are not limited to: one or more processors 410, a communication interface 420, a memory 430, and a communication bus 440 that connects the different components (including the memory 430, the communication interface 420, and the processor 410).
The communication bus 440 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, the communication bus 440 may include an industry standard architecture (ISA) bus, a micro channel architecture (MCA) bus, an enhanced ISA bus, a video electronics standards association (VESA) local bus, and a peripheral component interconnect (PCI) bus.
Electronic devices typically include a variety of computer system readable media. Such media can be any available media that can be accessed by the electronic device and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 430 may include computer system readable media in the form of volatile memory, such as random access memory (random access memory, RAM) and/or cache memory. Memory 430 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments shown in fig. 1-8 of the present description.
A program/utility having a set (at least one) of program modules may be stored in the memory 430, such program modules including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules generally carry out the functions and/or methods of the embodiments described in fig. 1-8 of the present specification.
The processor 410 executes a program stored in the memory 430 to perform various functional applications and data processing, for example, to implement the method for determining the empty rate provided in the embodiments shown in fig. 1 to 8 of the present specification.
Embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer instructions that cause a computer to execute the method for determining the empty rate provided by the embodiments shown in fig. 1 to 8 of the present disclosure.
The non-transitory computer readable storage media described above may employ any combination of one or more computer readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (erasable programmable read only memory, EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for the present specification may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the internet using an internet service provider).
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present specification. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present specification, the meaning of "plurality" means at least two, for example, two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present specification in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present specification.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
It should be noted that, the terminals in the embodiments of the present disclosure may include, but are not limited to, a personal computer (personal computer, PC), a personal digital assistant (personal digital assistant, PDA), a wireless handheld device, a tablet computer (tablet computer), a mobile phone, an MP3 player, an MP4 player, and the like.
In the several embodiments provided in this specification, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the elements is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in each embodiment of the present specification may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to perform part of the steps of the methods described in the embodiments of the present specification. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disk, etc.
The foregoing description of the preferred embodiments is provided for the purpose of illustration only, and is not intended to limit the scope of the disclosure, since any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the disclosure are intended to be included within the scope of the disclosure.
Claims (10)
1. A method for determining the empty rate comprises the following steps:
acquiring an image in a target place acquired by a camera; the camera is arranged in the target place;
detecting the image to obtain a seat and a human body in the image;
determining first pixel coordinates of a contact point of the seat and the ground, and determining second pixel coordinates of a contact point of a foot of the human body and the ground;
Determining a first coordinate of the contact point of the seat and the ground in a ground plane coordinate system according to the first pixel coordinate, and determining a second coordinate of the contact point of the foot of the human body and the ground in the ground plane coordinate system according to the second pixel coordinate;
determining the current seat number in the target place according to the first coordinate, and determining the current user number in the target place according to the second coordinate;
determining the current empty seat rate of the target place according to the seat number and the user number;
wherein, according to the first pixel coordinates, determining the first coordinates of the contact point between the seat and the ground in the ground plane coordinate system, and according to the second pixel coordinates, determining the second coordinates of the contact point between the foot of the human body and the ground in the ground plane coordinate system, further comprises:
when at least two first coordinates are obtained through detection in the area where the seat is located, performing non-maximum suppression processing on the at least two first coordinates, and keeping the first coordinate with the highest confidence within the area where the seat is located as the coordinate of the contact point between the seat and the ground in a ground plane coordinate system; and/or,
when at least two second coordinates are obtained through detection in the area where the human body is located, performing non-maximum suppression processing on the at least two second coordinates, and keeping the second coordinate with the highest confidence within the area where the human body is located as the coordinate of the contact point between the foot of the human body and the ground in a ground plane coordinate system.
2. The method of claim 1, wherein the determining a first coordinate of the seat-to-ground contact point in a ground plane coordinate system based on the first pixel coordinates, and determining a second coordinate of the human foot-to-ground contact point in a ground plane coordinate system based on the second pixel coordinates comprises:
extracting coordinates of orthogonal vanishing points in the image by detecting straight line segments in the image;
calculating the focal length and the external parameter matrix of the camera according to the coordinates of the orthogonal vanishing points;
obtaining a change matrix of the image and the camera coordinate system according to the external parameter matrix and the height of the camera from the ground;
and carrying out three-dimensional semantic reconstruction according to the first pixel coordinates and the second pixel coordinates, and the focal length of the camera and the change matrix to determine the first coordinates and the second coordinates.
3. The method of claim 1 or 2, wherein the number of cameras is at least two; after the image in the target place acquired by the camera is acquired, the method further comprises the following steps:
extracting feature points from images acquired by the at least two cameras;
Matching the extracted characteristic points to obtain characteristic point matching pairs;
and determining a transformation relation between the at least two cameras according to the characteristic point matching pairs so as to splice the images acquired by the at least two cameras into a scene.
4. The method of claim 1, wherein the first coordinates include at least two coordinates of a first seat obtained by detecting the first seat in consecutive N frames of images before a current time;
the determining, according to the first pixel coordinates, a first coordinate of the ground plane coordinate system of the contact point between the seat and the ground, further includes:
and performing non-maximum suppression processing on the at least two coordinates of the first seat, and keeping the coordinate with the highest confidence among the at least two coordinates of the first seat as the coordinate of the contact point between the first seat and the ground in a ground plane coordinate system.
5. A device for determining the empty rate, comprising:
the acquisition module is used for acquiring images in a target place acquired by the camera; the camera is arranged in the target place;
the detection module is used for detecting the image and acquiring seats and human bodies in the image;
The coordinate determining module is used for determining first pixel coordinates of the contact point of the seat and the ground and determining second pixel coordinates of the contact point of the foot of the human body and the ground; determining a first coordinate of the contact point of the seat and the ground in a ground plane coordinate system according to the first pixel coordinate, and determining a second coordinate of the contact point of the foot of the human body and the ground in the ground plane coordinate system according to the second pixel coordinate;
the empty rate determining module is used for determining the current seat number in the target place according to the first coordinate and determining the current user number in the target place according to the second coordinate; determining the current empty seat rate of the target place according to the seat number and the user number;
wherein, the device for determining the empty rate further comprises:
the reservation module is used for performing non-maximum suppression processing on at least two first coordinates when the at least two first coordinates are detected in the area where the seat is located, and keeping the first coordinate with the highest confidence within the area where the seat is located as the coordinate of the contact point between the seat and the ground in a ground plane coordinate system; and/or, when at least two second coordinates are detected in the area where the human body is located, performing non-maximum suppression processing on the at least two second coordinates and keeping the second coordinate with the highest confidence within the area where the human body is located as the coordinate of the contact point between the foot of the human body and the ground in a ground plane coordinate system.
6. The apparatus of claim 5, wherein the coordinate determining module comprises:
the coordinate extraction sub-module is used for extracting the coordinates of the orthogonal vanishing points in the image by detecting straight line segments in the image;
the computing sub-module is used for computing the focal length and the extrinsic parameter matrix of the camera according to the coordinates of the orthogonal vanishing points, and obtaining a transformation matrix between the image coordinate system and the camera coordinate system according to the extrinsic parameter matrix and the height of the camera above the ground;
and the reconstruction sub-module is used for performing three-dimensional semantic reconstruction according to the first pixel coordinate, the second pixel coordinate, the focal length of the camera and the transformation matrix, so as to determine the first coordinate and the second coordinate.
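The reconstruction described in claim 6 amounts to back-projecting a pixel through the calibrated camera and intersecting the viewing ray with the ground plane. A minimal sketch under simplifying assumptions (a pinhole model with known principal point, and a camera looking straight down; the function name and signature are illustrative, not from the patent):

```python
import numpy as np

def pixel_to_ground(u, v, f, cx, cy, R, h):
    """Intersect the viewing ray of pixel (u, v) with the ground plane z = 0.
    f: focal length in pixels; (cx, cy): principal point;
    R: 3x3 camera-to-world rotation; h: camera height above the ground (m).
    Returns the (x, y) ground-plane coordinate."""
    # unit-depth ray in the camera frame, rotated into the world frame
    ray = R @ np.array([(u - cx) / f, (v - cy) / f, 1.0])
    if ray[2] >= 0:
        raise ValueError("viewing ray does not hit the ground")
    t = h / -ray[2]                       # ray parameter at which z reaches 0
    x, y, _ = np.array([0.0, 0.0, h]) + t * ray
    return x, y

# Camera 3 m above the ground, looking straight down
# (camera x -> world x, image y -> world y, camera forward -> world -z):
R_down = np.diag([1.0, 1.0, -1.0])
x, y = pixel_to_ground(960, 540, 1000.0, 960.0, 540.0, R_down, 3.0)
print(round(x, 3), round(y, 3))  # 0.0 0.0 -- principal ray lands directly below
```

In the patent's pipeline the rotation and height would come from the vanishing-point calibration of the computing sub-module rather than being assumed, and the same mapping is applied to both the seat and the foot contact points.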
7. The apparatus of claim 5 or 6, further comprising:
the extraction module is used for, when the number of the cameras is at least two, extracting feature points from the images acquired by the at least two cameras after the acquisition module acquires the images in the target place;
the matching module is used for matching the feature points extracted by the extraction module to obtain feature point matching pairs;
and the scene stitching module is used for determining the transformation relationship between the at least two cameras according to the feature point matching pairs, so as to stitch the scenes of the images acquired by the at least two cameras.
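One common way to turn feature point matching pairs into a transformation between two camera views is a planar homography estimated by the direct linear transform (DLT). This is a sketch of that standard technique, not necessarily the patent's exact method, and it assumes exact (noise-free) matches; a practical system would add RANSAC to reject bad pairs:

```python
import numpy as np

def homography_from_matches(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst from >= 4
    matched point pairs via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each pair contributes two linear constraints on the 9 entries of H
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = flattened H
    return H / H[2, 2]                # normalise so H[2, 2] == 1

# Four matched pairs related by a pure translation of (+2, +1):
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 1), (3, 1), (2, 2), (3, 2)]
H = homography_from_matches(src, dst)
print(np.round(H, 3))  # recovers [[1, 0, 2], [0, 1, 1], [0, 0, 1]]
```

With the inter-camera transformation in hand, the images can be warped into a common frame and stitched, which is the role claim 7 assigns to the scene stitching module.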
8. The apparatus of claim 5, wherein the first coordinates comprise at least two coordinates of a first seat obtained by detecting the first seat in N consecutive frames of images before the current time;
the apparatus further comprises:
and the retaining module is used for performing non-maximum suppression processing on the at least two coordinates of the first seat after the coordinate determining module determines the first coordinate of the contact point of the seat and the ground in the ground plane coordinate system according to the first pixel coordinate, and retaining the coordinate with the highest confidence among the at least two coordinates of the first seat as the coordinate of the contact point of the first seat and the ground in the ground plane coordinate system.
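Claim 8 applies the same highest-confidence rule across N consecutive frames of the same seat rather than across overlapping detections within one frame. A minimal sketch (the `(x, y, confidence)` tuple layout is an illustrative assumption):

```python
def fuse_track(coords):
    """From one seat's detections over the last N frames, keep the single
    ground-plane coordinate with the highest confidence.
    coords: list of (x, y, confidence) tuples, one per frame."""
    return max(coords, key=lambda c: c[2])

# Three frames of the same seat; the middle frame is the most confident:
track = [(1.02, 2.01, 0.7), (0.98, 1.99, 0.9), (1.05, 2.03, 0.6)]
print(fuse_track(track))  # (0.98, 1.99, 0.9)
```

This temporal filtering stabilises the seat count against per-frame detection jitter before the vacancy rate is computed.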
9. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1-4.
10. A non-transitory computer readable storage medium storing computer instructions that, when executed, cause a computer to perform the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111090916.7A CN113792674B (en) | 2021-09-17 | 2021-09-17 | Method and device for determining empty rate and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113792674A CN113792674A (en) | 2021-12-14 |
CN113792674B true CN113792674B (en) | 2024-03-26 |
Family
ID=79183860
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111090916.7A Active CN113792674B (en) | 2021-09-17 | 2021-09-17 | Method and device for determining empty rate and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113792674B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116388668B (en) * | 2023-03-30 | 2024-03-12 | 兰州理工大学 | Photovoltaic module cleaning robot with straddle travelling mechanism and cleaning method |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104217206A (en) * | 2013-05-31 | 2014-12-17 | 上海亚视信息科技有限公司 | Real-time attendance counting method based on high-definition videos |
CN105550663A (en) * | 2016-01-07 | 2016-05-04 | 北京环境特性研究所 | Cinema attendance statistical method and system |
CN107122698A (en) * | 2016-07-19 | 2017-09-01 | 安徽大学 | A kind of real-time attendance statistical method of cinema based on convolutional neural networks |
CN109300159A (en) * | 2018-09-07 | 2019-02-01 | 百度在线网络技术(北京)有限公司 | Method for detecting position, device, equipment, storage medium and vehicle |
CN110941984A (en) * | 2019-09-25 | 2020-03-31 | 西南科技大学 | Study room seat state detection method and seat management system based on deep learning |
CN111241993A (en) * | 2020-01-08 | 2020-06-05 | 咪咕文化科技有限公司 | Seat number determination method and device, electronic equipment and storage medium |
JP6788710B1 (en) * | 2019-08-01 | 2020-11-25 | エコモット株式会社 | Image output device and image output method |
CN112017246A (en) * | 2019-05-28 | 2020-12-01 | 北京地平线机器人技术研发有限公司 | Image acquisition method and device based on inverse perspective transformation |
CN112288853A (en) * | 2020-10-29 | 2021-01-29 | 字节跳动有限公司 | Three-dimensional reconstruction method, three-dimensional reconstruction device, and storage medium |
WO2021026705A1 (en) * | 2019-08-09 | 2021-02-18 | 华为技术有限公司 | Matching relationship determination method, re-projection error calculation method and related apparatus |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9934442B2 (en) * | 2013-10-09 | 2018-04-03 | Nec Corporation | Passenger counting device, passenger counting method, and program recording medium |
- 2021-09-17: CN application CN202111090916.7A filed; patent CN113792674B, status: Active
Non-Patent Citations (1)
Title |
---|
Exploring library readers' self-study behaviour based on seat management system data; Chu Wenjing, Chu Zhaohui, Xu Xiaoyun; Library Research; 2017-09-30 (05); full text *
Also Published As
Publication number | Publication date |
---|---|
CN113792674A (en) | 2021-12-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109255352B (en) | Target detection method, device and system | |
CN108876791B (en) | Image processing method, device and system and storage medium | |
CN108256404B (en) | Pedestrian detection method and device | |
AU2018379393B2 (en) | Monitoring systems, and computer implemented methods for processing data in monitoring systems, programmed to enable identification and tracking of human targets in crowded environments | |
CN110276411A (en) | Image classification method, device, equipment, storage medium and medical treatment electronic equipment | |
JPWO2018047687A1 (en) | Three-dimensional model generation device and three-dimensional model generation method | |
CN107925755A (en) | The method and system of plane surface detection is carried out for image procossing | |
EP3190581B1 (en) | Interior map establishment device and method using cloud point | |
US8903139B2 (en) | Method of reconstructing three-dimensional facial shape | |
CN109308490A (en) | Method and apparatus for generating information | |
CN109978753B (en) | Method and device for drawing panoramic thermodynamic diagram | |
CN108876804 (en) | Matting model training and image matting methods, devices and systems, and storage medium | |
CN110555876B (en) | Method and apparatus for determining position | |
CN110619807B (en) | Method and device for generating global thermodynamic diagram | |
WO2022237026A1 (en) | Plane information detection method and system | |
CN108492284B (en) | Method and apparatus for determining perspective shape of image | |
CN111932681A (en) | House information display method and device and electronic equipment | |
CN112153320B (en) | Method and device for measuring size of article, electronic equipment and storage medium | |
CN113792674B (en) | Method and device for determining empty rate and electronic equipment | |
CN111753870B (en) | Training method, device and storage medium of target detection model | |
CN112308018A (en) | Image identification method, system, electronic equipment and storage medium | |
CN110657760B (en) | Method and device for measuring space area based on artificial intelligence and storage medium | |
CN104508706B (en) | Feature extraction method, program and system | |
US11657611B2 (en) | Methods and systems for augmented reality room identification based on room-object profile data | |
CN108446737B (en) | Method and device for identifying objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||