CN113542670B - Detection method, detection device and detection system - Google Patents

Detection method, detection device and detection system

Info

Publication number
CN113542670B
CN113542670B
Authority
CN
China
Prior art keywords
camera
user
ultrasonic radar
area
shelf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110208988.0A
Other languages
Chinese (zh)
Other versions
CN113542670A (en)
Inventor
张晶
庄艺唐
陈云凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hanshi Information Technology Co., Ltd.
Original Assignee
Shanghai Hanshi Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hanshi Information Technology Co., Ltd.
Priority to CN202110208988.0A
Publication of CN113542670A
Application granted
Publication of CN113542670B
Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Abstract

The application discloses a real-time detection method, a detection device and a detection system, belonging to the technical field of digital shelves. The detection system is arranged opposite a shelf and comprises an intelligent camera, and the intelligent camera comprises a camera and an ultrasonic radar arranged in parallel. The detection method comprises the following steps: monitoring, through the ultrasonic radar, whether a person moves in front of the shelf; when a user is present in front of the shelf, determining the occlusion area of the user in the image captured by the camera according to the preset positional relationship among the camera, the ultrasonic radar and the shelf, and the point-cloud position of the user determined by the ultrasonic radar; and processing, by a processor, the area of the captured image outside the occlusion area to determine the merchandise display information and out-of-stock information on the shelf. Because the occluding person is thus kept from being mixed with the merchandise, misjudgment during processing is avoided, and the accuracy of detecting the merchandise display information and out-of-stock information on the shelf is improved.

Description

Detection method, detection device and detection system
Technical Field
The application belongs to the technical field of digital goods shelves, and particularly relates to a detection method, a detection device and a detection system.
Background
In recent years, intelligent new retail, which is closely tied to people's daily lives, has developed rapidly. Technologies such as the internet, the internet of things, big data and artificial intelligence are applied to the digital and intelligent management of supermarkets, convenience stores and the like, while the relationships among merchandise, users and payment are optimized, providing customers with a faster, better and more convenient shopping experience.
Digital shelves, and the digitization of traditional shelves, are an important link in intelligent new retail. Requirements such as intelligent detection of merchandise arrangement and intelligent out-of-stock alarms place new demands on the intelligent management of digital shelves. Meanwhile, against the background of the 5G era, managing shelf information remotely, efficiently and timely through the cloud, and accurately positioning, navigating and tracking the display trajectory of every item, are inevitable requirements of new retail technology. This requires shelves to be digitized; for the large number of existing traditional shelves, digitized information about shelf merchandise can be obtained by photographing the shelf arrangement with a camera and processing the images with computer vision or AI algorithms.
In the process of implementing the present application, the inventors found that at least the following problem exists in the prior art: shelves are frequently blocked by people, and when merchandise display information and out-of-stock information are detected by an algorithm, the occluding objects are mixed with the merchandise, so misjudgment is prone to occur.
Disclosure of Invention
The embodiments of the present application aim to provide a detection method, a detection device and a detection system, which can solve the technical problem that, because existing shelves are frequently blocked by people, occluding objects become mixed with merchandise when merchandise display information and out-of-stock information are detected by an algorithm, so that misjudgment is prone to occur.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a detection method applied to a detection system, where the detection system is arranged opposite a shelf and includes an intelligent camera, the intelligent camera includes a camera and an ultrasonic radar arranged in parallel, and the method includes:
monitoring whether a person moves in front of the goods shelf through the ultrasonic radar;
under the condition that a user exists in front of the shelf, determining an occlusion area of the user in a shot image of the camera according to the preset position relation among the camera, the ultrasonic radar and the shelf and the point cloud position of the user determined by the ultrasonic radar;
processing, by the processor, an area of the captured image other than the obscured area to determine merchandise display information and out-of-stock information on the shelf.
Further, after monitoring whether a person moves in front of the shelf through the ultrasonic radar, the method further comprises the following steps:
adding 1 to the current number of people and adding 1 to the total number of people after the user enters the monitoring area of the ultrasonic radar;
after the user leaves the monitoring area of the ultrasonic radar, the current number of people is reduced by 1.
Further, the determining, according to the preset position relationship among the camera, the ultrasonic radar and the shelf and the point cloud position of the user determined by the ultrasonic radar, the blocking area of the user in the shot image of the camera specifically includes:
determining a transverse pixel coordinate value of the user in the captured picture according to the distance d between the camera and the ultrasonic radar, the point-cloud position coordinates (x, y) of the user, the FOV angle β of the camera, and the resolution W×H of the image captured by the camera, where W is the transverse resolution and H is the longitudinal resolution, the transverse pixel coordinate value being W/2 + |d−x| × W/(tan(β/2) × 2 × |y|);
and determining the occlusion area of the user in the shot image of the camera according to the transverse pixel coordinate value.
Further, the determining, according to the transverse pixel coordinate value, an occlusion region of the user in the captured image of the camera specifically includes:
taking the transverse pixel coordinate value as a center, and taking a rectangular area with a preset width and a preset height as the occlusion area of the user in the captured image of the camera; or,
taking the transverse pixel coordinate value as a center, and taking a preset human-shaped area as the occlusion area of the user in the captured image of the camera.
In a second aspect, an embodiment of the present application provides a detection apparatus applied to a detection system, where the detection system is arranged opposite a shelf and includes an intelligent camera, the intelligent camera includes a camera and an ultrasonic radar arranged in parallel, and the apparatus includes:
the detection module is used for monitoring whether a person moves in front of the goods shelf through the ultrasonic radar;
the determining module is used for determining an occlusion area of a user in a shot image of the camera according to the preset position relation among the camera, the ultrasonic radar and the goods shelf and the point cloud position of the user determined by the ultrasonic radar under the condition that the user exists in front of the goods shelf;
and the processing module is used for processing the area except the shielding area in the shot image through the processor so as to determine the commodity display information and the out-of-stock information on the shelf.
Further, the apparatus further comprises:
the first calculation module is used for adding 1 to the current number of people and adding 1 to the total number of people after a user enters a monitoring area of the ultrasonic radar;
and the second calculation module is used for subtracting 1 from the current number of people after the user leaves the monitoring area of the ultrasonic radar.
Further, the determining module specifically includes:
a coordinate determination submodule, configured to determine a transverse pixel coordinate value of the user in the captured picture according to the distance d between the camera and the ultrasonic radar, the point-cloud position coordinates (x, y) of the user, the FOV angle β of the camera, and the resolution W×H of the image captured by the camera, where W is the transverse resolution and H is the longitudinal resolution, the transverse pixel coordinate value being W/2 + |d−x| × W/(tan(β/2) × 2 × |y|);
and the region determining submodule is used for determining an occlusion region of the user in the shot image of the camera according to the transverse pixel coordinate value.
Further, the region determining submodule is specifically configured to take the transverse pixel coordinate value as a center and use a rectangular region with a preset width and a preset height as the occlusion region of the user in the captured image of the camera; or,
to take the transverse pixel coordinate value as the center and use a preset human-shaped region as the occlusion region of the user in the captured image of the camera.
In a third aspect, an embodiment of the present application provides a detection system, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the detection method according to the first aspect.
In the embodiment of the application, the occlusion area of a user in the captured image of the camera is determined according to the preset positional relationship among the camera, the ultrasonic radar and the shelf, and the point-cloud position of the user determined by the ultrasonic radar; then, only the area of the captured image outside the occlusion area is processed, so that the occluding person is kept from being mixed with the merchandise, misjudgment during processing is avoided, and the accuracy of detecting the merchandise display information and out-of-stock information on the shelf is improved.
Drawings
FIG. 1 is a schematic flow chart of a detection method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an actual installation of a detection system provided in an embodiment of the present application;
fig. 3 is a schematic top-view point cloud diagram of an ultrasonic radar according to an embodiment of the present application;
fig. 4 is a schematic diagram of smart camera detection provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a detection apparatus according to an embodiment of the present application.
The implementation, functional features and advantages of the present invention will be further explained with reference to the accompanying drawings.
Description of the reference numerals
401-ultrasonic radar, 402-camera, 403-shelf, 50-detection device, 501-detection module, 502-determination module, 5021-coordinate determination submodule, 5022-area determination submodule, 503-processing module, 504-first calculation module and 505-second calculation module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the application are capable of operation in sequences other than those illustrated or described herein, and that the terms "first," "second," etc. are generally used in a generic sense and do not limit the number of terms, e.g., a first term can be one or more than one.
The following describes in detail the detection method, detection device and detection system provided in the embodiments of the present application, through specific embodiments and their application scenarios, with reference to the accompanying drawings.
Example one
Referring to fig. 1, a schematic flow chart of a detection method provided in the embodiment of the present application is shown, and is applied to a detection system.
Referring to fig. 2, a schematic diagram of an actual installation of a detection system provided by an embodiment of the present application is shown, in the schematic diagram, the detection system is arranged opposite to a shelf, the detection system includes an intelligent camera, and the intelligent camera includes a camera and an ultrasonic radar arranged in parallel.
The detection method comprises the following steps:
s101: whether a person moves in front of the goods shelf is monitored through an ultrasonic radar.
It should be noted that the ultrasonic radar can accurately identify moving objects and their number within its coverage area by using the Doppler effect.
Specifically, the point cloud data of all the moving objects can be obtained, and each moving object in the coverage area can be accurately tracked.
Referring to fig. 3, a schematic top-view point cloud diagram of an ultrasonic radar according to an embodiment of the present application is shown.
When moving, each person produces an ultrasonic frequency shift at a different position, and the ultrasonic radar marks the positions of different persons by detecting these different frequency shifts. As shown, the frequency points encircled by each small circle represent one person; therefore, fig. 3 shows, through ultrasonic-radar monitoring, that 4 persons are present in the monitored area.
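As an illustration only (the patent does not specify the radar's clustering algorithm), a minimal sketch of turning such a radar point cloud into per-person detections might cluster nearby frequency-shift points and treat each cluster centroid as one person; the point format, the thresholds and the use of DBSCAN are all assumptions:

import numpy as np
from sklearn.cluster import DBSCAN

def count_people(points_xy: np.ndarray, eps_m: float = 0.4, min_points: int = 3):
    """Cluster 2-D radar points (meters) and return one centroid per person."""
    if len(points_xy) == 0:
        return []
    labels = DBSCAN(eps=eps_m, min_samples=min_points).fit_predict(points_xy)
    centroids = [points_xy[labels == k].mean(axis=0)
                 for k in set(labels) if k != -1]  # label -1 marks noise points
    return centroids

# Example: two tight groups of returns -> two people detected.
pts = np.array([[0.5, 1.2], [0.55, 1.25], [0.6, 1.3],
                [2.0, 1.0], [2.05, 1.1], [1.95, 1.05]])
print(len(count_people(pts)))  # 2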
S102: under the condition that a user exists in front of the shelf, determining a shielding area of the user in a shot image of the camera according to the preset position relation among the camera, the ultrasonic radar and the shelf and the point cloud position of the user determined by the ultrasonic radar.
Further, S102 may be completed by S1021 and S1022.
S1021: determining the transverse pixel coordinate value of the user in the captured picture according to the distance d between the camera and the ultrasonic radar, the point-cloud position coordinates (x, y) of the user, the FOV angle β of the camera, and the resolution W×H of the image captured by the camera, where W is the transverse resolution and H is the longitudinal resolution; the transverse pixel coordinate value is W/2 + |d−x| × W/(tan(β/2) × 2 × |y|).
Referring to fig. 4, a schematic diagram of detection of an intelligent camera provided in an embodiment of the present application is shown.
In fig. 4, R is the position of the ultrasonic radar 401, which is the origin of the radar coordinate system, with coordinates (0, 0). Point C is the lens position of the camera 402; since the distance between R and C in the illustrated structure is d, the coordinates of point C in the radar coordinate system are (d, 0). When the smart camera is installed, the positions of the shelf 403 and the camera 402 are fixed, that is, the distance Z between the camera 402 and the shelf 403 is known. A human body P is located between the shelf 403 and the camera 402, and its coordinate position in the radar coordinate system at a given shooting moment is (x, y). The position where this point is imaged in the captured picture of the camera 402 is therefore the point P' where the straight line through point C and point P intersects the shelf 403. The intersection point C' of the optical axis of the camera with the shelf 403, that is, the pixel center point of the picture of the camera 402, can be determined from the distance from point C to the shelf 403. By calculating the distance d' between P' and C' and then converting d' into the actual physical size corresponding to one pixel at the distance from the camera 402 to the shelf 403, the pixel position at which point P is imaged in the picture of the camera 402 can be calculated.
The specific calculation process is as follows: the coordinate position of point C' in the radar coordinate system is (d, Z). Let the angle between segments CP and CC' be α; then the angle between segments CP' and CC' is also α, so the length of segment C'P' is d' = tan(α) × Z. Let point Cx be the foot of the perpendicular dropped from point P onto segment CC'.
Then the coordinates of Cx in the radar coordinate system are (d, y), the length of segment PCx is |d−x|, and the length of segment CCx is |y|. According to the trigonometric relationship, tan(α) = |d−x|/|y|; therefore, d' = |d−x|/|y| × Z.
Further, let the FOV angle of the camera 402 be β and the resolution be W×H, where W is the transverse resolution, i.e., the maximum number of pixels horizontally, and H is the longitudinal resolution, i.e., the maximum number of pixels vertically.
At the distance Z from the camera 402 to the shelf 403, the physical size in the radar coordinate system corresponding to the transverse pixel width W/2 is tan(β/2) × Z; that is, the physical width corresponding to each pixel is tan(β/2) × Z × 2/W.
Therefore, the number of pixels corresponding to the segment d' is: d'/(tan(β/2) × Z × 2/W) = d' × W/(tan(β/2) × Z × 2).
From the above, take the pixel coordinate system of the picture imaged by the camera 402 with its origin at the top-left corner, the transverse direction positive to the right and the longitudinal direction positive downward. Then the transverse pixel coordinate of point P imaged in the picture is:
W/2 + d'×W/(tan(β/2)×Z×2) = W/2 + |d−x|/|y|×Z×W/(tan(β/2)×Z×2) = W/2 + |d−x|×W/(tan(β/2)×2×|y|).
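For concreteness, a small numerical sketch of this formula follows (illustrative only; the function name and the sample numbers are assumptions, not values from the patent). Note that Z cancels out of the final expression, so the result depends only on d, (x, y), β and W.

import math

def occlusion_center_px(d, x, y, beta_deg, W):
    # Transverse pixel coordinate of point P per the derivation above:
    # W/2 + |d - x| * W / (tan(beta/2) * 2 * |y|)
    beta = math.radians(beta_deg)
    return W / 2 + abs(d - x) * W / (math.tan(beta / 2) * 2 * abs(y))

# Assumed example: camera-radar offset d = 0.1 m, person at (0.6, 1.0) m,
# 90-degree FOV, image 1920 pixels wide:
# W/2 = 960, |d - x| = 0.5, tan(45 deg) = 1, |y| = 1 -> 960 + 0.5*1920/2 = 1440
print(occlusion_center_px(d=0.1, x=0.6, y=1.0, beta_deg=90, W=1920))  # 1440.0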
S1022: determining the occlusion area of the user in the captured image of the camera 402 according to the transverse pixel coordinate value.
Specifically, a rectangular area with a preset width and a preset height may be used as the occlusion area of the user in the captured image of the camera 402, with the transverse pixel coordinate value as its center; or,
with the transverse pixel coordinate value as the center, a preset human-shaped area may be used as the occlusion area of the user in the captured image of the camera.
It should be noted that a uniform preset value can be chosen as an approximation of the user's body height; therefore, the calculated transverse pixel coordinate represents the relative position of the user in front of the shelf 403, and a preset occlusion area can in turn be determined to represent the actual imaging of the user in the captured image.
S103: the processor processes the captured image except for the blocked area to determine merchandise display information and out-of-stock information on the shelves.
In the embodiment of the application, the occlusion area of a user in the captured image of the camera is determined according to the preset positional relationship among the camera 402, the ultrasonic radar 401 and the shelf 403, and the point-cloud position of the user determined by the ultrasonic radar; then, only the area of the captured image outside the occlusion area is processed, which keeps the occluding person from being mixed with the merchandise, thereby avoiding misjudgment during processing and improving the accuracy of detecting the merchandise display information and out-of-stock information on the shelf 403.
Further, after S101, the method may further include:
S104: after a user enters the monitoring area of the ultrasonic radar, adding 1 to the current number of people and adding 1 to the total number of people;
S105: after a user leaves the monitoring area of the ultrasonic radar, subtracting 1 from the current number of people.
Whether people appear and leave in front of the monitored goods shelf or not can be rapidly and accurately detected through the ultrasonic radar.
Further, when the current number of people is greater than 0, it can be determined that a user is present in front of the shelf, and the process proceeds directly to S102.
Further, the total number of people is the accumulated count of people who have appeared in front of the shelf, and can be used for passenger-flow statistics.
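A minimal sketch of these S104/S105 counters follows (for illustration; the class and method names are assumptions, not part of the patent):

class ShelfTrafficCounter:
    def __init__(self):
        self.current = 0  # people currently in the radar's monitoring area
        self.total = 0    # cumulative visitors, usable for passenger-flow stats

    def on_enter(self):  # a user enters the monitoring area (S104)
        self.current += 1
        self.total += 1

    def on_leave(self):  # a user leaves the monitoring area (S105)
        self.current = max(0, self.current - 1)

    def shelf_occluded(self) -> bool:
        return self.current > 0  # greater than 0 means proceed to S102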
Example two
Referring to fig. 5, which shows a schematic structural diagram of a detection apparatus provided in an embodiment of the present application, the detection apparatus 50 is applied to a detection system, the detection system is disposed opposite to a shelf, the detection system includes an intelligent camera, the intelligent camera includes a camera and an ultrasonic radar disposed in parallel, and the apparatus 50 includes:
the detection module 501 is used for monitoring whether a person moves in front of the shelf through an ultrasonic radar;
the determining module 502 is configured to determine, when a user is present in front of the shelf, the occlusion area of the user in the captured image of the camera according to the preset positional relationship among the camera, the ultrasonic radar and the shelf, and the point-cloud position of the user determined by the ultrasonic radar;
and a processing module 503, configured to process, by the processor, an area other than the blocking area in the captured image to determine the merchandise display information and the out-of-stock information on the shelf.
Further, the detecting device 50 further includes:
the first calculation module 504 is configured to add 1 to the current number of people and add 1 to the total number of people after the user enters the monitoring area of the ultrasonic radar;
and a second calculating module 505, configured to subtract 1 from the current number of people after the user leaves the monitoring area of the ultrasonic radar.
Further, the determining module 502 specifically includes:
the coordinate determination submodule 5021 is configured to determine the transverse pixel coordinate value of the user in the captured picture according to the distance d between the camera and the ultrasonic radar, the point-cloud position coordinates (x, y) of the user, the FOV angle β of the camera, and the resolution W×H of the image captured by the camera, where W is the transverse resolution and H is the longitudinal resolution, the transverse pixel coordinate value being W/2 + |d−x| × W/(tan(β/2) × 2 × |y|);
the region determining submodule 5022 is used for determining an occlusion region of the user in a shot image of the camera according to the transverse pixel coordinate value.
Further, the region determining submodule 5022 is specifically configured to take the transverse pixel coordinate value as a center and use a rectangular region with a preset width and a preset height as the occlusion region of the user in the captured image of the camera; or,
to take the transverse pixel coordinate value as the center and use a preset human-shaped region as the occlusion region of the user in the captured image of the camera.
The detection apparatus 50 provided in this embodiment of the application can implement each process implemented in the foregoing method embodiments, and is not described here again to avoid repetition.
In the embodiment of the application, the occlusion area of a user in the captured image of the camera is determined according to the preset positional relationship among the camera, the ultrasonic radar and the shelf, and the point-cloud position of the user determined by the ultrasonic radar; then, only the area of the captured image outside the occlusion area is processed, which keeps the occluding person from being mixed with the merchandise, thereby avoiding misjudgment during processing and improving the accuracy of detecting the merchandise display information and out-of-stock information on the shelf.
The virtual device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal.
EXAMPLE III
The embodiment of the present application provides a detection system, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor; when executed by the processor, the program or instructions implement the steps of the detection method according to the first embodiment, and the same technical effect can be achieved. To avoid repetition, the description is not repeated here.
In the embodiment of the application, the occlusion area of a user in the captured image of the camera is determined according to the preset positional relationship among the camera, the ultrasonic radar and the shelf, and the point-cloud position of the user determined by the ultrasonic radar; then, only the area of the captured image outside the occlusion area is processed, which keeps the occluding object from being mixed with the merchandise, thereby avoiding misjudgment during processing and improving the accuracy of detecting the merchandise display information and out-of-stock information on the shelf.
The above description is only an example of the present invention and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (7)

1. A detection method applied to a detection system, wherein the detection system is arranged opposite a shelf, the detection system comprises an intelligent camera, the intelligent camera comprises a camera and an ultrasonic radar arranged in parallel, and the method comprises:
monitoring whether a person moves in front of the goods shelf through the ultrasonic radar;
under the condition that a user exists in front of the goods shelf, determining an occlusion area of the user in a shot image of the camera according to the preset position relation among the camera, the ultrasonic radar and the goods shelf and the point cloud position of the user determined by the ultrasonic radar;
processing, by a processor, an area of the captured image other than the obscured area to determine merchandise display information and out-of-stock information on the shelf;
the method comprises the following steps of determining a shielding area of a user in a shot image of the camera according to the preset position relation among the camera, the ultrasonic radar and the goods shelf and the point cloud position of the user determined by the ultrasonic radar, and specifically comprises the following steps:
determining a transverse pixel coordinate value of the user in the captured picture according to the distance d between the camera and the ultrasonic radar, the point-cloud position coordinates (x, y) of the user, the FOV angle β of the camera, and the resolution W×H of the image captured by the camera, wherein W is the transverse resolution and H is the longitudinal resolution, and the transverse pixel coordinate value is W/2 + |d−x| × W/(tan(β/2) × 2 × |y|);
and determining the occlusion area of the user in the shot image of the camera according to the transverse pixel coordinate value.
2. The method of claim 1, wherein after monitoring whether a person is moving in front of the shelf by the ultrasonic radar, further comprising:
adding 1 to the current number of people and adding 1 to the total number of people after the user enters the monitoring area of the ultrasonic radar;
after the user leaves the monitoring area of the ultrasonic radar, the current number of people is reduced by 1.
3. The method according to claim 1, wherein the determining an occlusion region of the user in the captured image of the camera according to the lateral pixel coordinate value specifically includes:
taking the transverse pixel coordinate value as a center, and taking a rectangular area with a preset width and a preset height as the occlusion area of the user in the captured image of the camera; or,
taking the transverse pixel coordinate value as a center, and taking a preset human-shaped area as the occlusion area of the user in the captured image of the camera.
4. A detection apparatus applied to a detection system, wherein the detection system is arranged opposite a shelf, the detection system comprises an intelligent camera, the intelligent camera comprises a camera and an ultrasonic radar arranged in parallel, and the apparatus comprises:
the detection module is used for monitoring whether a person moves in front of the goods shelf through the ultrasonic radar;
the determining module is used for determining an occlusion area of a user in a shot image of the camera according to the preset position relation among the camera, the ultrasonic radar and the goods shelf and the point cloud position of the user determined by the ultrasonic radar under the condition that the user exists in front of the goods shelf;
the processing module is used for processing the area except the shielding area in the shot image through a processor so as to determine the commodity display information and the out-of-stock information on the shelf;
the determining module specifically includes:
a coordinate determination submodule, configured to determine a transverse pixel coordinate value of the user in the captured picture according to the distance d between the camera and the ultrasonic radar, the point-cloud position coordinates (x, y) of the user, the FOV angle β of the camera, and the resolution W×H of the image captured by the camera, wherein W is the transverse resolution and H is the longitudinal resolution, and the transverse pixel coordinate value is W/2 + |d−x| × W/(tan(β/2) × 2 × |y|);
and the region determining submodule is used for determining an occlusion region of the user in the shot image of the camera according to the transverse pixel coordinate value.
5. The apparatus of claim 4, further comprising:
the first calculation module is used for adding 1 to the current number of people and adding 1 to the total number of people after a user enters a monitoring area of the ultrasonic radar;
and the second calculation module is used for subtracting 1 from the current number of people after the user leaves the monitoring area of the ultrasonic radar.
6. The apparatus according to claim 4, wherein the region determining submodule is configured to take the transverse pixel coordinate value as a center and use a rectangular region with a preset width and a preset height as the occlusion region of the user in the captured image of the camera; or,
to take the transverse pixel coordinate value as the center and use a preset human-shaped region as the occlusion region of the user in the captured image of the camera.
7. A detection system comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the detection method according to any one of claims 1 to 3.
CN202110208988.0A 2021-02-24 2021-02-24 Detection method, detection device and detection system Active CN113542670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110208988.0A CN113542670B (en) 2021-02-24 2021-02-24 Detection method, detection device and detection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110208988.0A CN113542670B (en) 2021-02-24 2021-02-24 Detection method, detection device and detection system

Publications (2)

Publication Number Publication Date
CN113542670A CN113542670A (en) 2021-10-22
CN113542670B (en) 2023-04-18

Family

ID=78094412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110208988.0A Active CN113542670B (en) 2021-02-24 2021-02-24 Detection method, detection device and detection system

Country Status (1)

Country Link
CN (1) CN113542670B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7949568B2 (en) * 2007-08-31 2011-05-24 Accenture Global Services Limited Determination of product display parameters based on image processing
US8189855B2 (en) * 2007-08-31 2012-05-29 Accenture Global Services Limited Planogram extraction based on image processing
JP6484956B2 (en) * 2014-08-18 2019-03-20 富士通株式会社 Display status management method, display status management program, and information processing apparatus
CN109040539B (en) * 2018-07-10 2020-12-01 京东方科技集团股份有限公司 Image acquisition device, goods shelf and image identification method
CN109523691B (en) * 2018-12-10 2024-03-26 深圳市思拓通信系统有限公司 Unmanned supermarket shelf monitoring device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007131385A1 (en) * 2006-05-12 2007-11-22 Shanghai Yaowei Industry Co, Ltd. Calculating instrument for counting the people coming in and going out
JP2007304479A (en) * 2006-05-15 2007-11-22 Necディスプレイソリューションズ株式会社 Video display device, and display method by the same
CN202533803U (en) * 2012-02-11 2012-11-14 陶重犇 Mobile robot object tracking platform equipped with network camera
CN104599251A (en) * 2015-01-28 2015-05-06 武汉大学 Repair method and system for true orthophoto absolutely-blocked region
CN104865578A (en) * 2015-05-12 2015-08-26 上海交通大学 Indoor parking lot high-precision map generation device and method
CN109951636A (en) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 It takes pictures processing method, device, mobile terminal and storage medium
CN110775028A (en) * 2019-10-29 2020-02-11 长安大学 System and method for detecting automobile windshield shelters and assisting in driving

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Himangshu Kalita et al. Dynamics and Control of a Hopping Robot for Extreme Environment Exploration on the Moon and Mars. 2020 IEEE Aerospace Conference, 2020, full text. *
Tom Denton. Autonomous Driving and Driver Assistance Systems. China Machine Press, 2021, pp. 49-52. *
Zhao Xiang; Yang Ming; Wang Chunxiang; Wang Bing. Lane-level localization method based on vision and millimeter-wave radar. Journal of Shanghai Jiao Tong University, 2018, No. 01, full text. *

Also Published As

Publication number Publication date
CN113542670A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
CN109840504B (en) Article taking and placing behavior identification method and device, storage medium and equipment
US10180326B2 (en) Staying state analysis device, staying state analysis system and staying state analysis method
US10212324B2 (en) Position detection device, position detection method, and storage medium
US9514541B2 (en) Image processing apparatus and image processing method
US11341740B2 (en) Object identification method and object identification device
US20150169954A1 (en) Image processing to derive movement characteristics for a plurality of queue objects
US20120027299A1 (en) Method and system for audience digital monitoring
US20120243733A1 (en) Moving object detecting device, moving object detecting method, moving object detection program, moving object tracking device, moving object tracking method, and moving object tracking program
CN102057348A (en) Multiple pointer ambiguity and occlusion resolution
CN108647587B (en) People counting method, device, terminal and storage medium
CN110189381B (en) External parameter calibration system, method, terminal and readable storage medium
KR101051389B1 (en) Adaptive background-based object detection and tracking device and method
US20150278588A1 (en) Person counting device, person counting system, and person counting method
Wang et al. Automatic node selection and target tracking in wireless camera sensor networks
CN112215142B (en) Method, device and equipment for detecting goods shelf stock shortage rate based on depth image information
CN113542670B (en) Detection method, detection device and detection system
CN110850974A (en) Method and system for detecting intention interest point
Micheloni et al. Real-time image processing for active monitoring of wide areas
US20230410523A1 (en) Information processing apparatus, control method, and program
US8351653B2 (en) Distance estimation from image motion for moving obstacle detection
CN116088503A (en) Dynamic obstacle detection method and robot
Palaio et al. Ground plane velocity estimation embedding rectification on a particle filter multi-target tracking
CN112489240B (en) Commodity display inspection method, inspection robot and storage medium
US11954924B2 (en) System and method for determining information about objects using multiple sensors
CN109063675A (en) Vehicle density calculation method, system, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant