CN112489240B - Commodity display inspection method, inspection robot and storage medium - Google Patents

Commodity display inspection method, inspection robot and storage medium

Info

Publication number
CN112489240B
CN112489240B (application CN202011241285.XA)
Authority
CN
China
Prior art keywords
position information
inspection
information
distance
price tag
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011241285.XA
Other languages
Chinese (zh)
Other versions
CN112489240A (en)
Inventor
张晶
庄艺唐
李汪佩
金小平
周旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hanshi Information Technology Co ltd
Original Assignee
Shanghai Hanshi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hanshi Information Technology Co ltd filed Critical Shanghai Hanshi Information Technology Co ltd
Priority to CN202011241285.XA
Publication of CN112489240A
Application granted
Publication of CN112489240B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G — PHYSICS
    • G07 — CHECKING-DEVICES
    • G07C — TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 1/00 — Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C 1/20 — Checking timed patrols, e.g. of watchman
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 — Administration; Management
    • G06Q 10/08 — Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q 10/087 — Inventory or stock management, e.g. order filling, procurement or balancing against orders

Landscapes

  • Business, Economics & Management (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Quality & Reliability (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the technical field of intelligent robots and provides a commodity display inspection method comprising the following steps: moving according to preset inspection information, and when detecting that the robot is currently at a preset inspection shooting position, acquiring an inspection image of the shelf corresponding to that position; identifying from the inspection image a target commodity, its price tag and commodity code, and out-of-stock information for the target commodity, and acquiring first position information of the price tag within the inspection image; determining spatial position information of the price tag in the display space according to a preset position-information conversion strategy and the first position information; and sending the out-of-stock information and the spatial position information to a server to trigger the server to generate commodity display information. By having the inspection robot collect the relevant information on patrol, spatial coordinate information is generated for every shelf and price tag, meeting the demand for intelligent, digital shelf out-of-stock detection; the scheme is simple to deploy, fast to compute, highly accurate and widely applicable.

Description

Commodity display inspection method, inspection robot and storage medium
Technical Field
The application belongs to the technical field of smart supermarket technologies, and particularly relates to a commodity display inspection method, an inspection robot and a storage medium.
Background
In recent years, intelligent new retail, which is closely tied to everyday life, has developed rapidly. The digital shelf is an important link in intelligent new-retail business, and commodity management requires that shelves be checked by stocktaking. Existing shelf stocktaking methods fall into two main categories: in the first, staff take inventory manually and enter the data into a database by hand; in the second, a camera fixed to each shelf photographs it, commodity images are collected, and stocktaking is performed by recognizing the images. The first approach consumes substantial manpower, and the position information of shelves and goods cannot be captured accurately by hand; the second requires the distance and position between camera and shelf to be set precisely, and specific commodity price tags to be configured, before stocktaking can be completed, so it lacks broad applicability.
Disclosure of Invention
The embodiments of the present application provide a commodity display inspection method and an inspection robot, which can solve the problems that existing shelf stocktaking is labor-intensive and lacks broad applicability.
In a first aspect, an embodiment of the present application provides a method for inspecting a merchandise display, including:
moving according to preset inspection information, and when detecting that the robot is currently at a preset inspection shooting position, acquiring an inspection image of the shelf corresponding to that position;
identifying from the inspection image a target commodity, its price tag and commodity code, and out-of-stock information for the target commodity, and acquiring first position information of the price tag within the inspection image;
determining spatial position information of the price tag in the display space according to a preset position-information conversion strategy and the first position information;
and sending the out-of-stock information and the spatial position information to a server to trigger the server to generate commodity display information.
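Read as software, the four claimed steps amount to a detect-convert-report loop. The sketch below is a minimal illustration under stated assumptions: `PriceTagDetection`, `convert_to_space`, `inspect_shot` and the meters-per-pixel scaling are all hypothetical names and simplifications, not the patent's actual conversion strategy (which uses camera extrinsics and pixel-density coefficients, as the later claims refine).

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PriceTagDetection:
    commodity_code: str          # commodity code read from the tag
    out_of_stock: bool           # shortage flag for the target commodity
    pixel_pos: Tuple[int, int]   # "first position information": (u, v) in the image

def convert_to_space(pixel_pos, shot_pose, meters_per_pixel):
    """Toy position-information conversion strategy: offset the shot pose by
    the pixel displacement scaled to meters (a stand-in for the patent's
    extrinsics-based conversion)."""
    u, v = pixel_pos
    x0, y0, z0 = shot_pose
    return (x0 + u * meters_per_pixel, y0, z0 + v * meters_per_pixel)

def inspect_shot(detections: List[PriceTagDetection], shot_pose, mpp=0.001):
    """Steps 3 and 4: convert each tag's pixel position to space coordinates
    and build the payload that would be sent to the server."""
    return [
        {"code": d.commodity_code,
         "out_of_stock": d.out_of_stock,
         "space_pos": convert_to_space(d.pixel_pos, shot_pose, mpp)}
        for d in detections
    ]
```

The per-shot payload groups shortage flags with converted coordinates, which is all the server needs to regenerate commodity display information.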
Further, the spatial position information includes horizontal position information, longitudinal position information and vertical position information;
the determining of the spatial position information of the price tag in the display space according to the preset position-information conversion strategy and the first position information includes:
acquiring camera extrinsic calibration parameters, center position information and a step-length coefficient of the inspection robot, where the center position information is the position of a center point, the center point being the projection into the inspection image of the optical center of the camera that captured the image;
calculating the horizontal and longitudinal position information of the price tag according to the first position information, the center position information, the camera extrinsic calibration parameters, the step-length coefficient and a first pixel-density coefficient;
and calculating the vertical position information of the price tag according to the first position information, the camera extrinsic calibration parameters and a second pixel-density coefficient.
Further, the calculating of the horizontal and longitudinal position information of the price tag according to the first position information, the center position information, the camera extrinsic calibration parameters, the step-length coefficient and the first pixel-density coefficient includes:
determining second position information, corresponding to the center point, on the robot map according to the center position information, the camera extrinsic calibration parameters and the step-length coefficient;
and calculating the horizontal and longitudinal position information according to the second position information, the first position information and the first pixel-density coefficient.
Further, the determining of the second position information corresponding to the center point on the robot map according to the center position information, the camera extrinsic calibration parameters and the step-length coefficient includes:
obtaining a first position-information correspondence according to the center position information, the camera extrinsic calibration parameters and the step-length coefficient, where the correspondence relates the center position information to the position of the chassis center point of the inspection robot;
and calculating the second position information corresponding to the center point on the robot map according to the first position-information correspondence.
Further, the camera extrinsic calibration parameters include a first distance between the chassis center point of the inspection robot and the depth camera;
the obtaining of the first position-information correspondence according to the center position information, the camera extrinsic calibration parameters and the step-length coefficient includes:
determining a third distance, between the high-definition camera and the shelf, according to the first distance and a second distance between the depth camera and the shelf;
and obtaining the first position-information correspondence according to the third distance, the center position information and the step-length coefficient.
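The patent does not state the formula relating these distances. One natural reading, assuming the chassis center, the depth camera and the high-definition camera all lie along the robot's forward axis, is that the third distance is the chassis-to-shelf distance minus the HD camera's own forward offset; the function below and its `hd_offset_from_chassis` parameter are illustrative assumptions, not the claimed computation.

```python
def hd_camera_to_shelf(first_distance: float,
                       second_distance: float,
                       hd_offset_from_chassis: float = 0.0) -> float:
    """Illustrative derivation of the 'third distance' (HD camera to shelf).

    first_distance:  chassis center -> depth camera (a calibration extrinsic)
    second_distance: depth camera   -> shelf (measured live by the depth camera)
    hd_offset_from_chassis: chassis center -> HD camera along the same axis
                            (assumed parameter, not named in the patent)
    """
    chassis_to_shelf = first_distance + second_distance
    return chassis_to_shelf - hd_offset_from_chassis
```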
Further, the calculating of the horizontal and longitudinal position information according to the second position information, the first position information and the first pixel-density coefficient includes:
calculating a fourth distance, between the center point of the inspection image and a target point to be calculated, according to the first pixel-density coefficient;
and calculating the horizontal and longitudinal position information of the target point according to the second position information, the first position information and the fourth distance.
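A plausible interpretation of the fourth-distance step, assuming the first pixel-density coefficient is a meters-per-pixel scale and that the shelf face runs parallel to the map's x-axis at the shot; every name here is illustrative rather than taken from the patent.

```python
def horizontal_longitudinal(second_pos, tag_pixel, center_pixel,
                            meters_per_pixel):
    """second_pos: (x, y) map position corresponding to the projected center
    point C. The 'fourth distance' is the pixel offset of the target point
    from C, scaled by the first pixel-density coefficient (assumed to be
    meters per pixel)."""
    du = tag_pixel[0] - center_pixel[0]
    fourth_distance = du * meters_per_pixel   # signed, along the shelf face
    x_c, y_c = second_pos
    # shelf face assumed parallel to the map x-axis at this shooting position,
    # so only the horizontal coordinate shifts; the longitudinal one is kept
    return (x_c + fourth_distance, y_c)
</antml>```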
Further, the calculating of the vertical position information of the price tag according to the first position information, the camera extrinsic calibration parameters and the second pixel-density coefficient includes:
acquiring a fifth distance corresponding to the center point, the fifth distance being the vertical height of the horizontal plane through the center point;
calculating a sixth distance, between the center point and the target point to be calculated in the vertical direction, according to the second pixel-density coefficient;
and calculating the vertical position information of the target price tag from the fifth and sixth distances.
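Under the same assumptions, with the second pixel-density coefficient read as a vertical meters-per-pixel scale and the fifth distance read as the camera's mounting height, the fifth/sixth-distance computation reduces to one line. This is a sketch of that reading, not the patent's exact formula.

```python
def vertical_position(tag_v: int, center_v: int,
                      camera_height: float, meters_per_pixel_v: float) -> float:
    """fifth distance: vertical height of the camera's horizontal plane
    (i.e. the camera's mounting height, a calibration extrinsic).
    sixth distance: vertical pixel offset from the center point scaled by
    the second pixel-density coefficient (assumed meters per pixel).
    Image v grows downward, so points above the center get a positive offset."""
    sixth_distance = (center_v - tag_v) * meters_per_pixel_v
    return camera_height + sixth_distance
```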
Further, before determining the spatial position information of the price tag in the display space according to the preset position-information conversion strategy and the first position information, the method further includes:
performing de-duplication processing on the price tags.
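The patent does not specify the de-duplication criterion. Since adjacent shots overlap, one plausible scheme, shown purely as an illustration, keys on commodity code plus spatial proximity of the detections:

```python
def dedupe_price_tags(tags, min_sep=0.10):
    """tags: list of (commodity_code, (x, y, z)) detections, possibly
    containing duplicates from overlapping shots. Keep the first detection
    of each code; a later detection with the same code is kept only if it
    lies farther than min_sep meters away, in which case it is treated as
    a genuinely distinct shelf facing. Illustrative only."""
    kept = []
    for code, pos in tags:
        is_duplicate = any(
            code == c and
            sum((a - b) ** 2 for a, b in zip(pos, p)) ** 0.5 < min_sep
            for c, p in kept
        )
        if not is_duplicate:
            kept.append((code, pos))
    return kept
```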
In a second aspect, an embodiment of the present application provides an inspection robot, including:
a first processing unit, configured to move according to preset inspection information and, when detecting that the robot is currently at a preset inspection shooting position, acquire an inspection image of the shelf corresponding to that position;
a second processing unit, configured to identify from the inspection image a target commodity, its price tag and commodity code, and out-of-stock information for the target commodity, and to acquire first position information of the price tag within the inspection image;
a third processing unit, configured to determine spatial position information of the price tag in the display space according to a preset position-information conversion strategy and the first position information;
and a sending unit, configured to send the out-of-stock information and the spatial position information to a server to trigger the server to generate commodity display information.
Further, the spatial position information includes horizontal position information, longitudinal position information and vertical position information;
the third processing unit is specifically configured to:
acquire camera extrinsic calibration parameters, center position information and a step-length coefficient of the inspection robot, where the center position information is the position of a center point, the center point being the projection into the inspection image of the optical center of the camera that captured the image;
calculate the horizontal and longitudinal position information of the price tag according to the first position information, the center position information, the camera extrinsic calibration parameters, the step-length coefficient and a first pixel-density coefficient;
and calculate the vertical position information of the price tag according to the first position information, the camera extrinsic calibration parameters and a second pixel-density coefficient.
The third processing unit is specifically configured to:
determine second position information, corresponding to the center point, on the robot map according to the center position information, the camera extrinsic calibration parameters and the step-length coefficient;
and calculate the horizontal and longitudinal position information according to the second position information, the first position information and the first pixel-density coefficient.
The third processing unit is specifically configured to:
obtain a first position-information correspondence according to the center position information, the camera extrinsic calibration parameters and the step-length coefficient, where the correspondence relates the center position information to the position of the chassis center point of the inspection robot;
and calculate the second position information corresponding to the center point on the robot map according to the first position-information correspondence.
Further, the camera extrinsic calibration parameters include a first distance between the chassis center point of the inspection robot and the depth camera;
the third processing unit is specifically configured to:
determine a third distance, between the high-definition camera and the shelf, according to the first distance and a second distance between the depth camera and the shelf;
and obtain the first position-information correspondence according to the third distance, the center position information and the step-length coefficient.
The third processing unit is specifically configured to:
calculate a fourth distance, between the center point of the inspection image and a target point to be calculated, according to the first pixel-density coefficient;
and calculate the horizontal and longitudinal position information of the target point according to the second position information, the first position information and the fourth distance.
The third processing unit is specifically configured to:
acquire a fifth distance corresponding to the center point, the fifth distance being the vertical height of the horizontal plane through the center point;
calculate a sixth distance, between the center point and the target point to be calculated in the vertical direction, according to the second pixel-density coefficient;
and calculate the vertical position information of the target price tag from the fifth and sixth distances.
Further, the inspection robot further includes:
a fourth processing unit, configured to perform de-duplication processing on the price tags.
In a third aspect, an embodiment of the present application provides an inspection robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the commodity display inspection method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the commodity display inspection method according to the first aspect.
In the embodiments of the present application, the inspection robot moves according to preset inspection information and, when detecting that it is currently at a preset inspection shooting position, acquires an inspection image of the shelf corresponding to that position; identifies from the inspection image a target commodity, its price tag and commodity code, and out-of-stock information for the target commodity, and acquires first position information of the price tag within the inspection image; determines spatial position information of the price tag in the display space according to a preset position-information conversion strategy and the first position information; and sends the out-of-stock information and the spatial position information to a server to trigger the server to generate commodity display information. By having the inspection robot collect the relevant information on patrol, spatial coordinate information is generated for every shelf and price tag, meeting the demand for intelligent, digital shelf out-of-stock detection; the technical scheme is simple to deploy, fast to compute, highly accurate and widely applicable, and can satisfy the digital-modeling needs of virtually all traditional shelving.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required by the embodiments or the prior-art description are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a merchandise display inspection method according to a first embodiment of the present application;
fig. 2 is a schematic diagram of a polling scene of a polling robot in a merchandise display polling method according to a first embodiment of the application;
fig. 3 is a schematic flowchart of a thinning S103 in a merchandise display inspection method according to a first embodiment of the present application;
fig. 4 is a schematic diagram of a center point in a merchandise display inspection method according to a first embodiment of the present application;
fig. 5 is a schematic diagram illustrating horizontal positions of positional relationships between a robot and a shelf in a merchandise display inspection method according to a first embodiment of the present application;
fig. 6 is a schematic view of the inspection robot in the merchandise display inspection method according to the first embodiment of the present application, which photographs at each preset inspection photographing position in steps along the shelf from the starting point;
fig. 7 is a schematic flowchart of a refinement of S1032 in a merchandise display inspection method according to the first embodiment of the present application;
fig. 8 is a schematic flowchart of a refinement of S10321 in a merchandise display inspection method according to a first embodiment of the present application;
fig. 9 is a schematic view of an inspection robot according to a second embodiment of the present application;
fig. 10 is a schematic view of an inspection robot according to a third embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In recent years, intelligent new retail, closely tied to everyday life, has developed rapidly. Internet, Internet-of-Things, big-data and artificial-intelligence technologies are used to achieve digital, intelligent management of supermarkets, convenience stores and the like, while optimizing the relationships among commodities, users and payment and providing customers with a faster, better and more convenient shopping experience.
Digitizing shelving, whether purpose-built digital shelves or traditional shelves made digital, is an important link in intelligent new-retail business. Requirements such as intelligent display detection and intelligent out-of-stock alerts place new intelligent-management demands on the digital shelf. Meanwhile, with the arrival of the 5G era, managing shelf information remotely, efficiently and promptly through the cloud, and performing accurate positioning, navigation and display-trajectory tracking and analysis for all goods, is an inevitable requirement of new-retail technology. Shelving must therefore be digitized, yet the vast number of existing traditional shelves still lack digital twins, a substantial problem in urgent need of a solution.
The traditional answer to these requirements is manual stocktaking, with data recorded by hand and entered into a database by hand. This approach, however, is time-consuming and error-prone, and in some countries or regions the labor cost is considerable. Addressing this pain point, the present patent provides a commodity display inspection method. In the embodiments, based on an inspection robot and a method of recognizing commodity price tags and converting every price-tag position into spatial coordinates, this intelligent-inspection technical scheme generates corresponding digital-twin information for commodities and shelves. The scheme can exploit the fine precision of the inspection robot to perform display detection of price tags and commodities along the shelf face, and is particularly suited to large-area, many-shelf supermarket inspection work.
Referring to fig. 1, fig. 1 is a schematic flow chart of a commodity display inspection method according to a first embodiment of the present application. The executing subject of the commodity display inspection method in this embodiment is an inspection robot. As shown in fig. 1, the commodity display inspection method may include:
S101: moving according to the preset inspection information, and when detecting that the robot is currently at a preset inspection shooting position, acquiring an inspection image of the shelf corresponding to that position.
In the present embodiment, an inspection robot with a motion chassis is used, on which several high-definition cameras are mounted to capture inspection images of a shelf. The number of high-definition cameras is determined by their field of view, ensuring that their combined picture covers the whole shelf.
Before the inspection robot can inspect, a site-survey deployment is needed to obtain the preset inspection information; in this implementation the survey can be performed by the robot itself. During the survey the site environment is scanned, a map is constructed, and the shelves to be inspected are identified. An inspection route is preset on the constructed map, and a photographing start position and end position are set for the robot in each shelf aisle. The criterion for choosing these positions is that the robot's high-definition camera be about 50 cm from the shelf price tags, with the two shelf edges aligned as closely as possible with the robot's photographing center. Once each shelf's start and end positions are set, the step length for shooting along the shelf and the shelf's left/right position relative to the starting direction are configured, as shown in fig. 2, a schematic diagram of the inspection robot during inspection. Shelf A lies to the left of the robot's direction of travel, so its left/right relative position must be configured as left during the survey deployment; shelf B lies to the right of the robot's direction of travel, so its left/right relative position must be configured as right. From this information each specific shooting point and the left and right shelf-edge positions can be calculated and recorded locally.
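The survey's calculation of "each specific shooting point" is not spelled out in the patent. A simple assumed scheme, sketched below with illustrative names, interpolates evenly between the photographing start and end positions at roughly the configured step length:

```python
import math

def shooting_points(start, end, step):
    """Generate preset inspection shooting positions along a shelf aisle,
    stepping from the photographing start position to the end position.
    start/end: (x, y) map coordinates; step: configured stride in meters.
    Assumed scheme: evenly spaced points, including both endpoints."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    n = max(1, int(length // step))          # number of strides that fit
    return [(start[0] + dx * i / n, start[1] + dy * i / n)
            for i in range(n + 1)]
```

With a 2 m aisle and a 0.5 m step this yields five shooting positions, one every half meter including both shelf edges.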
The information obtained above constitutes the preset inspection information; during an actual inspection the robot reads it to inspect the shelves that require inspection.
The inspection robot then starts its patrol, moving according to the preset inspection information; upon detecting that it is at a preset inspection shooting position, it controls the camera to shoot, thereby acquiring the inspection image of the shelf corresponding to that position.
S102: identifying from the inspection image a target commodity, its price tag and commodity code, and out-of-stock information for the target commodity, and acquiring first position information of the price tag within the inspection image.
The inspection robot stores an intelligent recognition algorithm in advance for recognizing inspection images; it may be a trained neural network model or the like, and is not limited here. After acquiring the inspection image of the shelf corresponding to a preset inspection position, the robot can run the recognition algorithm on it to identify the target commodity and its commodity code, the price tag of the target commodity, the first position information of the price tag within the inspection image, and the out-of-stock information for the target commodity. The first position information of the price tag in the inspection image may be a two-dimensional pixel coordinate.
The inspection robot moves according to the preset inspection information, acquires the inspection image of the corresponding shelf when it reaches each preset inspection position, and moves on to the next preset position once the image is taken. Meanwhile the robot recognizes each inspection image, obtaining for each shelf the target commodity and its commodity code, the price tag of the target commodity and the first position information of the price tag within the image.
S103: determining the spatial position information of the price tag in the display space according to a preset position-information conversion strategy and the first position information.
The inspection robot stores a preset position-information conversion strategy for converting the two-dimensional position of the price tag in the inspection image into spatial position information in the display space, and converts the first position information into spatial position information accordingly.
The spatial position information includes horizontal position information, longitudinal position information, and vertical position information, and how to determine the horizontal position information, the longitudinal position information, and the vertical position information in the spatial position information is described in detail below. S103 may include S1031 to S1033, as shown in fig. 3, where S1031 to S1033 are specifically as follows:
S1031: and acquiring a camera calibration external parameter, center position information and a progressive step length coefficient of the inspection robot.
The inspection robot acquires its camera calibration external parameters, which are preset camera-related parameters and can include the distance between the chassis center point of the inspection robot and the depth camera carried on it, the spatial relative position relationship between the depth camera and the high-definition camera, the actual assembly height of each high-definition camera, and the like. The spatial relative position relationship between the depth camera and the high-definition camera is a rotation-translation matrix describing the transformation between the two cameras.
The inspection robot acquires the central position information, which is the position information of the central point; the central point is the projection, in the inspection image, of the center of the camera that shot the image. As shown in fig. 4 and fig. 5, fig. 4 is a schematic view of the central point, a front view of a picture taken by the camera, and fig. 5 is a schematic top view of the horizontal positional relationship between the robot and the shelf. The pixel point at which the camera's central axis, horizontal to the ground, projects onto the shelf is named point C; the coordinates of point C are the central position information, and the central point in fig. 5 is point C.
The abscissa of point C is determined by the pixel width of the high-definition camera: since point C is the central point, it can be calculated by dividing the horizontal pixel width of the high-definition camera by 2. The ordinate of point C corresponds to the physical height of the high-definition camera; it can be understood that the ordinate of point C is unchanged no matter how far the inspection robot is from the shelf. Let the coordinates of point C be (xc, yc); then xc = Wp ÷ 2, where Wp is the lateral pixel width of the high-definition camera.
And the inspection robot acquires the progressive step-length coefficients. As shown in fig. 6, fig. 6 is a schematic diagram of the inspection robot photographing at each preset inspection photographing position in steps along the shelf from the start point. The coordinates of the starting point are (xs, ys, rs), the next point is (x1, y1, r1), and so on for the following points. x and y are the coordinates on the x-axis and y-axis of the robot map, respectively, and r is the angle of the heading direction of the inspection robot. The length of the line segment between two chassis positions is the step length of each movement; for example, if the step length is 20 cm, the line segment is 20 cm long, which converts to 20 ÷ 5 = 4 pixels on the robot map. Δx and Δy are the absolute values of the components of the step length on the x-axis and y-axis, respectively; they are drawn with dashed lines because the x-axis component in the figure is in the negative direction. So (x1, y1, r1) and (xs, ys, rs) satisfy the following relationship:
x1=xs–Δx
y1=ys+Δy
r1=rs
the following fixed relationship is satisfied between Δx, Δy, and the step length (hereinafter denoted Δstep) at each anchor point:
α=Δx÷Δstep
β=Δy÷Δstep
Here, α and β are the step-length progressive coefficients. Assuming that the included angle between Δy and the line segment is θ, that is, the angle between the shelf row-face direction and the positive y-axis direction, the step-length progressive coefficients α and β can be obtained by trigonometric calculation, as shown below:
α=-sinθ
β=cosθ
Of course, the figure above shows only one case of the robot's motion direction; in other cases the step-length progressive coefficients α and β are calculated similarly according to the specific situation, and are not described again here.
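For illustration only, the step-length progressive coefficients and the waypoint recursion above can be sketched as follows. The function names and the Python formulation are not part of the patent, and the signs assume the motion direction shown in fig. 6 (x-component of each step negative):

```python
import math

def step_coefficients(theta_rad):
    # Step-length progressive coefficients for the case of fig. 6,
    # where theta is the angle between the shelf row face and the
    # positive y-axis: alpha = -sin(theta), beta = cos(theta).
    return -math.sin(theta_rad), math.cos(theta_rad)

def next_waypoint(x, y, r, step, alpha, beta):
    # Advance one preset photographing position along the shelf.
    # The heading angle r stays unchanged (r1 = rs); the signed
    # components alpha*step and beta*step reproduce x1 = xs - dx,
    # y1 = ys + dy from the text.
    return x + alpha * step, y + beta * step, r
```

With θ = 30° and a 20 cm step from the origin, the robot's next shooting position moves 10 cm in the negative x direction and about 17.3 cm in the positive y direction, while its heading is unchanged.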
S1032: and calculating the horizontal position information and the longitudinal position information of the price tag according to the first position information, the center position information, the camera calibration external parameter, the progressive step length coefficient and the first pixel density coefficient.
And the inspection robot calculates the horizontal position information and the longitudinal position information of the price tag according to the first position information, the center position information, the camera calibration external parameter, the progressive step length coefficient and the first pixel density coefficient.
The first pixel density coefficient is the ratio between the number of pixels on the inspection image shot by the high-definition camera and the corresponding physical length in the transverse direction, that is, pixels per unit physical length. It can be calculated as follows:
the inspection robot calculates the maximum length L2 of the inspection image in the horizontal direction of the picture according to the distance L1 from the high-definition camera to the shelf and the horizontal field of view θfov_h from the high-definition camera intrinsics. L1 can be calculated according to the following formula:
L1=L3+L4
wherein L3 is the distance between the high-definition camera and the depth camera in the chassis radius direction, and can be calculated from the rotation-translation matrix between the two cameras in the camera external parameters; L4 is the distance between the chassis center point of the inspection robot and the depth camera carried on it, and belongs to the camera external parameters. Then the maximum length in the horizontal direction of the picture of the inspection image captured by the high-definition camera is L2 = L1 × tan(θfov_h ÷ 2) × 2.
The first pixel density coefficient Rl2p = Wp ÷ L2, where Rl2p is the pixel density coefficient in the transverse direction.
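Assuming L1 is already known, the computation of L2 and Rl2p above can be sketched as follows (function and parameter names are hypothetical, not from the patent):

```python
import math

def first_pixel_density(l1, fov_h_rad, wp):
    # L2: maximum physical width of the scene covered by the picture
    # at shelf distance L1, from the horizontal field of view.
    l2 = l1 * math.tan(fov_h_rad / 2) * 2
    # Rl2p: pixels per unit physical length in the transverse direction.
    return wp / l2
```

For example, with a 90° horizontal field of view at L1 = 100 cm, the picture covers L2 = 200 cm; a 4000-pixel-wide image then gives Rl2p = 20 pixels per centimetre.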
Specifically, the horizontal position information and the longitudinal position information may be calculated in the manner of S10321 to S10322; as shown in fig. 7, S10321 to S10322 are specifically as follows:
S10321: and determining second position information corresponding to the center point of the inspection image on the robot map according to the center position information, the camera calibration external parameter and the progressive step length coefficient.
And the inspection robot determines the second position information corresponding to the center point of the inspection image on the robot map according to the central position information, the camera calibration external parameters, and the progressive step-length coefficient; that is, the central position information is converted into horizontal-plane coordinates in the three-dimensional display space.
Specifically, the inspection robot can determine the position corresponding relationship and calculate the second position information according to the position corresponding relationship. S10321 may include S103211 to S103212, as shown in fig. 8, S103211 to S103212 are specifically as follows:
S103211: determining a corresponding relation of first position information according to the center position information, the camera calibration external parameter and the progressive step length coefficient; the first position information corresponding relation comprises a corresponding relation between the central position information and the position information of the chassis central point of the inspection robot.
And the inspection robot determines the first position information correspondence according to the central position information, the camera calibration external parameters, and the progressive step-length coefficient. The first position information correspondence is the correspondence between the central position information and the position information of the chassis center point of the inspection robot. The position information of the chassis center point is known, namely the current coordinates of the robot's center point. On the basis of this correspondence, the coordinate values on the robot map corresponding to the two-dimensional pixel x-axis coordinates in any inspection image can be calculated.
Specifically, the camera calibration external parameters include a first distance between the chassis center point of the inspection robot and the depth camera. A third distance between the high-definition camera and the shelf is determined according to the first distance and a second distance between the depth camera and the shelf; the first position information correspondence is then obtained according to the third distance, the central position information, and the progressive step-length coefficients. Let the coordinates of the central point C on the top-view robot map be (xr`, yr`), and let the coordinates of the current chassis center point of the inspection robot be (xr, yr); the chassis center point is the preset inspection photographing position. Then the coordinates (xr`, yr`) of point C on the robot map can be calculated from the distance L5 between the high-definition camera and the shelf and the included angle θr between the shelf row-face direction and the positive y-axis direction.
And wherein:
L5=L6+L7
where L6 is the distance from the depth camera to the shelf, measured by the depth camera at the start and end points during route surveying; L7 is the first distance between the chassis center point of the inspection robot and the depth camera.
and θr = θ. Thus:
xr`=xr-cosθ*L5
yr`=yr–sinθ*L5
And since cosθ and −sinθ are exactly the aforementioned step-length progressive coefficients β and α, we have
xr`=xr-β*L5
yr`=yr+α*L5
Thus the correspondence between the central position information and the position information of the chassis center point of the inspection robot, that is, the first position information correspondence, is obtained; it maps the two-dimensional pixel x-axis coordinate of the central point to the chassis center point coordinates of the robot: xc → (xr - β*L5, yr + α*L5).
S103212: and calculating second position information corresponding to the central point on the robot map according to the corresponding relation of the first position information.
The inspection robot, based on the two formulas
xr`=xr-β*L5
yr`=yr+α*L5
can calculate the coordinates (xr`, yr`) of the central point C on the top-view robot map, that is, the second position information of the central point on the robot map.
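As a minimal sketch (names hypothetical; alpha and beta are the step-length progressive coefficients from S1031), the projection of the central point onto the robot map can be written as:

```python
def center_point_on_map(xr, yr, alpha, beta, l5):
    # Project the picture's central point C onto the robot map:
    # xr` = xr - beta * L5, yr` = yr + alpha * L5, where (xr, yr)
    # is the chassis centre and L5 the camera-to-shelf distance.
    return xr - beta * l5, yr + alpha * l5
```

For instance, when the shelf row face is parallel to the y-axis (θ = 0, so α = 0 and β = 1), point C sits L5 units from the chassis centre along the negative x direction, as expected from fig. 5.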
S10322: and calculating the horizontal position information and the longitudinal position information according to the second position information, the first position information and a first pixel density coefficient.
And the inspection robot calculates horizontal position information and longitudinal position information according to the second position information, the first position information and the first pixel density coefficient. The inspection robot may calculate horizontal position information and longitudinal position information through a distance between the second position information and the first position information.
Specifically, the inspection robot calculates a fourth distance between the central point and the target point to be calculated according to the first pixel density coefficient, and then calculates the horizontal position information and the longitudinal position information of the target point according to the second position information, the first position information, and the fourth distance. Suppose the first position information of the target point C1 is (x1, y1), that is, its x-axis coordinate on the inspection image is x1; then the pixel distance between point C1 and the central point C is xc–x1, and the fourth distance L8 between the two points can be calculated through the pixel density coefficient:
L8=(xc–x1)÷Rl2p
Then the coordinates of point C1 on the robot map, denoted (x1`, y1`) to distinguish them from its pixel coordinates, can be calculated as follows:
x1`=xr`+α*L8
y1`=yr`+β*L8.
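A minimal sketch of the horizontal and longitudinal computation in S10322 (function and parameter names are hypothetical; the central-point map coordinates, pixel coordinates, and Rl2p come from the preceding steps):

```python
def tag_horizontal_position(xc_px, x1_px, rl2p, xc_map, yc_map, alpha, beta):
    # Fourth distance L8: physical offset of the price tag from the
    # central point along the shelf row, from the pixel distance
    # divided by the first pixel density coefficient.
    l8 = (xc_px - x1_px) / rl2p
    # Map coordinates of the target point, i.e. its horizontal and
    # longitudinal position information.
    return xc_map + alpha * l8, yc_map + beta * l8
```

For example, a tag 1000 pixels left of the central point at 20 pixels per centimetre sits 50 cm from point C along the shelf row.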
S1033: and calculating the vertical position information of the target price tag according to the first position information, the camera calibration external parameter and the second pixel density coefficient.
And the inspection robot calculates the vertical position information of the target price tag according to the first position information, the camera calibration external parameter and the second pixel density coefficient.
The second pixel density coefficient is the ratio between the number of pixels on the high-definition camera picture and the corresponding physical length in the vertical direction, and can be calculated as follows:
the inspection robot calculates the maximum length L9 of the shot picture in the vertical direction according to the vertical field of view θfov_v from the high-definition camera intrinsics and the distance L1 from the high-definition camera to the shelf; L1 can be calculated according to the following formula:
L1=L3+L4
wherein L3 is the distance between the high-definition camera and the depth camera in the chassis radius direction, and can be calculated from the rotation-translation matrix between the two cameras in the camera external parameters; L4 is the distance between the chassis center point of the inspection robot and the depth camera carried on it, and belongs to the camera external parameters. Then the maximum vertical length of the picture captured by the high-definition camera is L9 = L1 × tan(θfov_v ÷ 2) × 2.
The second pixel density coefficient Rh2p = Hp ÷ L9, where Rh2p is the second pixel density coefficient in the vertical direction and Hp is the maximum pixel height of the inspection image captured by the high-definition camera.
Specifically, after the horizontal and longitudinal position information of the target point has been acquired, its vertical position information can be calculated as follows. First, a fifth distance corresponding to the central point is acquired; the fifth distance is the vertical height of the horizontal plane corresponding to the central point, namely the installation height of the high-definition camera, a parameter belonging to the camera calibration external parameters. Then a sixth distance in the vertical direction between the central point of the inspection image and the target point is calculated according to the second pixel density coefficient. For example, suppose the y-axis coordinate of the target point C1 on the inspection image is y1 and the y-axis coordinate of the central point C is yc; then the pixel distance between point C1 and the central point C is y1–yc, and the actual physical length between the two points, that is, the sixth distance L10, can be calculated through the second pixel density coefficient Rh2p as follows:
L10=(y1–yc)÷Rh2p。
and the inspection robot calculates the vertical position information of the target price tag according to the fifth distance and the sixth distance: the vertical position information of the target point to be calculated is z1 = zc – L10, where zc is the fifth distance.
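The vertical coordinate can likewise be sketched (names hypothetical; zc is the camera installation height from the external parameters):

```python
def tag_vertical_position(y1_px, yc_px, rh2p, zc):
    # Sixth distance L10: physical vertical offset of the target point
    # below the camera axis, from the pixel distance divided by the
    # second pixel density coefficient; then z1 = zc - L10.
    l10 = (y1_px - yc_px) / rh2p
    return zc - l10
```

With pixel y-coordinates growing downward, a tag 200 pixels below the central point at 20 pixels per centimetre and a 120 cm camera height sits at a vertical height of 110 cm.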
Before S103, the price tags may be deduplicated. Deduplication is based on the pixel distance between a price tag's pixel position and the central point of the corresponding photo: the smaller the distance, the higher the confidence, so the detection captured closest to the camera's picture center is retained.
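The deduplication rule can be sketched as follows, keeping for each price tag code the detection whose pixel position is closest to its photo's central point (data layout and names are hypothetical):

```python
import math

def deduplicate_price_tags(detections):
    # detections: iterable of (tag_code, px, py, cx, cy), where
    # (px, py) is the tag's pixel position and (cx, cy) the central
    # point of the photo it was detected in. Keep, per tag_code, the
    # detection with the smallest centre distance (highest confidence).
    best = {}
    for code, px, py, cx, cy in detections:
        d = math.hypot(px - cx, py - cy)
        if code not in best or d < best[code][0]:
            best[code] = (d, (px, py))
    return {code: pos for code, (d, pos) in best.items()}
```

This runs once over all detections from all inspection images, so each price tag contributes exactly one position to the conversion in S103.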
S104: and sending the out-of-stock information and the spatial position information to a server for triggering the server to generate commodity display information.
The inspection robot can send the out-of-stock information and the spatial position information to the server, triggering the server to generate commodity display information. Meanwhile, after the inspection robot has constructed the spatial coordinate information of all price tags, it can present a spatial layout map of all commodities and their corresponding price tags, and this spatial layout map is also sent to the server.
In the embodiment of the application, the inspection robot moves according to the preset inspection information and, when it detects that its current position is a preset inspection shooting position, acquires the inspection image of the shelf corresponding to that position; it identifies the target commodity, the price tag, the commodity code, and the out-of-stock information of the target commodity from the inspection image and acquires the first position information of the price tag in the inspection image; it determines the spatial position information of the price tag in the display space according to the preset position information conversion strategy and the first position information; and it sends the out-of-stock information and the spatial position information to the server, triggering the server to generate commodity display information. In this method, the relevant information is collected through the robot's inspection rounds and the spatial coordinate information of each shelf and price tag is generated, meeting the intelligent and digital requirements of shelf out-of-stock detection; the technical scheme is simple to deploy, fast to compute, highly accurate, and widely applicable, and can well meet the digital modeling requirements of traditional shelves.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Referring to fig. 9, fig. 9 is a schematic view of an inspection robot according to a second embodiment of the present disclosure. The included units are used for executing steps in the embodiments corresponding to fig. 1, fig. 3, and fig. 7 to fig. 8, and refer to the related descriptions in the embodiments corresponding to fig. 1, fig. 3, and fig. 7 to fig. 8. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 9, the inspection robot 9 includes:
the first processing unit 910 is configured to move according to the preset inspection information and, when detecting that the current position is a preset inspection shooting position, acquire an inspection image of the shelf corresponding to that preset inspection shooting position;
a second processing unit 920, configured to identify a target product, a price tag and a product code thereof, and stock shortage information of the target product from the inspection image, and acquire first position information of the price tag in the inspection image;
a third processing unit 930, configured to determine spatial location information of the price tag in the display space according to a preset location information conversion policy and the first location information;
and a sending unit 940, configured to send the out-of-stock information and the spatial location information to a server, and to trigger the server to generate commodity display information.
Further, the spatial position information includes horizontal position information, longitudinal position information, and vertical position information;
the third processing unit 930 is specifically configured to:
acquiring camera calibration external parameter, center position information and progressive step length coefficient of the inspection robot; the center position information is position information of a center point, and the center point is a point projected in the inspection image by a camera center point of a camera for shooting the inspection image;
calculating horizontal position information and longitudinal position information of the price tag according to the first position information, the center position information, the camera calibration external parameter, the progressive step length coefficient and a first pixel density coefficient;
and calculating the vertical position information of the price tag according to the first position information, the camera calibration external parameter and the second pixel density coefficient.
The third processing unit 930 is specifically configured to:
determining second position information corresponding to the central point on the robot map according to the central position information, the camera calibration external parameter and the progressive step length coefficient;
and calculating the horizontal position information and the longitudinal position information according to the second position information, the first position information and a first pixel density coefficient.
The third processing unit 930 is specifically configured to:
determining a first position information correspondence according to the center position information, the camera calibration external parameter and the progressive step length coefficient; the first position information correspondence comprises a correspondence between the central position information and the position information of the chassis center point of the inspection robot;
and calculating second position information corresponding to the central point on the robot map according to the corresponding relation of the first position information.
Further, the camera calibration external parameter comprises a first distance between a chassis center point of the inspection robot and the depth camera;
the third processing unit 930 is specifically configured to:
determining a third distance between the high-definition camera and the shelf according to the first distance and a second distance between the depth camera and the shelf;
and obtaining a corresponding relation of the first position information according to the third distance, the center position information and the progressive step length coefficient.
The third processing unit 930 is specifically configured to:
calculating a fourth distance between the center point of the inspection image and a target point to be calculated according to the first pixel density coefficient;
and calculating the horizontal position information and the longitudinal position information of the target point to be calculated according to the second position information, the first position information and the fourth distance.
The third processing unit 930 is specifically configured to:
acquiring a fifth distance corresponding to the central point; the fifth distance is the vertical height of the horizontal plane corresponding to the central point;
calculating a sixth distance between the central point and a target point to be calculated in the vertical direction according to the second pixel density coefficient;
and calculating to obtain the vertical position information of the target price tag according to the fifth distance and the sixth distance.
Further, the inspection robot 9 further includes:
and the fourth processing unit is used for carrying out duplication elimination processing on the price tag.
Fig. 10 is a schematic view of an inspection robot according to a third embodiment of the present application. As shown in fig. 10, the inspection robot 10 of this embodiment includes: a processor 100, a memory 101, and a computer program 102, such as a commodity display inspection program, stored in the memory 101 and operable on the processor 100. When executing the computer program 102, the processor 100 implements the steps of an embodiment of the commodity display inspection method, such as steps S101 to S104 shown in fig. 1. Alternatively, when executing the computer program 102, the processor 100 implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 910 to 940 shown in fig. 9.
Illustratively, the computer program 102 may be partitioned into one or more modules/units that are stored in the memory 101 and executed by the processor 100 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 102 in the inspection robot 10. For example, the computer program 102 may be divided into a first processing unit, a second processing unit, a third processing unit, and a sending unit, and the specific functions of each unit are as follows:
the first processing unit is used for moving according to preset polling information and acquiring a polling image of the goods shelf corresponding to a preset polling shooting position when the current position is detected to be at the preset polling shooting position;
the second processing unit is used for identifying a target commodity, a price tag and a commodity code thereof and the stock shortage information of the target commodity from the patrol image and acquiring first position information of the price tag in the patrol image;
the third processing unit is used for determining the spatial position information of the price tag in the display space according to a preset position information conversion strategy and the first position information;
and the sending unit is used for sending the out-of-stock information and the spatial position information to a server and triggering the server to generate commodity display information.
The inspection robot may include, but is not limited to, a processor 100, a memory 101, a high definition camera, a depth camera. Those skilled in the art will appreciate that fig. 10 is merely an example of the inspection robot 10 and does not constitute a limitation of the inspection robot 10 and may include more or fewer components than shown, or some components in combination, or different components, for example, the inspection robot may also include input and output devices, network access devices, buses, etc.
The Processor 100 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 101 may be an internal storage unit of the inspection robot 10, such as a hard disk or a memory of the inspection robot 10. The memory 101 may also be an external storage device of the inspection robot 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the inspection robot 10. Further, the memory 101 may also include both an internal storage unit and an external storage device of the inspection robot 10. The memory 101 is used to store the computer program and other programs and data required by the inspection robot. The memory 101 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods in the embodiments described above may be realized by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (9)

1. A commodity display inspection method, characterized in that the method is applied to an inspection robot and comprises the following steps:
moving according to preset inspection information and, when it is detected that the inspection robot is currently at a preset inspection shooting position, acquiring an inspection image of a shelf corresponding to the preset inspection shooting position;
identifying, from the inspection image, a target commodity, its price tag and commodity code, and out-of-stock information of the target commodity, and acquiring first position information of the price tag in the inspection image;
determining spatial position information of the price tag in the display space according to a preset position information conversion strategy and the first position information; and
sending the out-of-stock information and the spatial position information to a server, so as to trigger the server to generate commodity display information;
wherein the spatial position information comprises horizontal position information, longitudinal position information and vertical position information;
and wherein the determining spatial position information of the price tag in the display space according to a preset position information conversion strategy and the first position information comprises:
acquiring a camera calibration external parameter, center position information and a progressive step size coefficient of the inspection robot, wherein the center position information is position information of a center point, and the center point is the point at which the camera center point of the camera that captured the inspection image is projected into the inspection image;
calculating horizontal position information and longitudinal position information of the price tag according to the first position information, the center position information, the camera calibration external parameter, the progressive step size coefficient and a first pixel density coefficient; and
calculating vertical position information of the price tag according to the first position information, the camera calibration external parameter and a second pixel density coefficient.
2. The commodity display inspection method according to claim 1, wherein the calculating horizontal position information and longitudinal position information of the price tag according to the first position information, the center position information, the camera calibration external parameter, the progressive step size coefficient and the first pixel density coefficient comprises:
determining second position information, on the robot map, corresponding to the center point according to the center position information, the camera calibration external parameter and the progressive step size coefficient; and
calculating the horizontal position information and the longitudinal position information according to the second position information, the first position information and the first pixel density coefficient.
3. The commodity display inspection method according to claim 2, wherein the determining second position information, on the robot map, corresponding to the center point according to the center position information, the camera calibration external parameter and the progressive step size coefficient comprises:
obtaining a first position information correspondence according to the center position information, the camera calibration external parameter and the progressive step size coefficient, wherein the first position information correspondence comprises a correspondence between the center position information and position information of a chassis center point of the inspection robot; and
calculating the second position information, on the robot map, corresponding to the center point according to the first position information correspondence.
4. The commodity display inspection method according to claim 3, wherein the camera calibration external parameter includes a first distance between the chassis center point of the inspection robot and a depth camera;
and the obtaining a first position information correspondence according to the center position information, the camera calibration external parameter and the progressive step size coefficient comprises:
determining a third distance between a high-definition camera and the shelf according to the first distance and a second distance between the depth camera and the shelf; and
obtaining the first position information correspondence according to the third distance, the center position information and the progressive step size coefficient.
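As a purely illustrative sketch of the distance chain in this claim (the mounting geometry, and in particular whether the first distance is added or subtracted, is an assumption not fixed by the claim text):

```python
# Hypothetical sketch of claim 4's distance computation. Assumes the
# high-definition camera sits on the chassis center axis while the depth
# camera is offset toward the shelf by the calibrated first distance;
# the real geometry could differ, in which case the sign flips.

def third_distance(first_distance_m: float, second_distance_m: float) -> float:
    """Distance from the high-definition camera to the shelf.

    first_distance_m:  chassis center point -> depth camera (calibration extrinsic)
    second_distance_m: depth camera -> shelf (measured at inspection time)
    """
    return first_distance_m + second_distance_m
```

Under this assumed layout, a robot whose depth camera sits 0.2 m ahead of the chassis center and reads 0.8 m to the shelf would place the high-definition camera 1.0 m from the shelf.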
5. The commodity display inspection method according to claim 2, wherein the calculating the horizontal position information and the longitudinal position information according to the second position information, the first position information and the first pixel density coefficient comprises:
calculating, according to the first pixel density coefficient, a fourth distance between the center point of the inspection image and a target point to be calculated; and
calculating the horizontal position information and the longitudinal position information of the target point to be calculated according to the second position information, the first position information and the fourth distance.
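One possible reading of this claim, sketched in Python. The interpretation of the first pixel density coefficient (pixels per meter at the shelf plane) and the use of a shelf-heading angle to place the offset on the map are assumptions; the patent does not fix these conventions:

```python
import math

# Hypothetical sketch of claim 5. All parameter names are illustrative,
# not taken from the patent.

def horizontal_longitudinal(second_position_xy, tag_px, center_px,
                            first_pixel_density, shelf_heading_rad):
    """Map the price tag's pixel offset from the image center onto the robot map.

    second_position_xy:  map position corresponding to the image center point
    first_pixel_density: assumed pixels per meter at the shelf plane
    shelf_heading_rad:   assumed direction of the shelf on the map
    """
    # Fourth distance: metric offset between the image center and the target point
    fourth_distance = (tag_px[0] - center_px[0]) / first_pixel_density
    # Shift the center point's map position along the shelf direction
    x = second_position_xy[0] + fourth_distance * math.cos(shelf_heading_rad)
    y = second_position_xy[1] + fourth_distance * math.sin(shelf_heading_rad)
    return x, y
```

For example, with the image center mapped to (1.0, 2.0), a tag 100 pixels to the right of center, 100 pixels per meter, and a shelf aligned with the map x-axis, the tag lands at (2.0, 2.0).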
6. The commodity display inspection method according to claim 1, wherein the calculating vertical position information of the price tag according to the first position information, the camera calibration external parameter and the second pixel density coefficient comprises:
acquiring a fifth distance corresponding to the center point, the fifth distance being the vertical height of the horizontal plane corresponding to the center point;
calculating, according to the second pixel density coefficient, a sixth distance between the center point and a target point to be calculated in the vertical direction; and
calculating the vertical position information of the price tag according to the fifth distance and the sixth distance.
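A minimal sketch of this claim's vertical computation. The sign convention (image y grows downward, so a smaller pixel y means a physically higher tag) and the reading of the second pixel density coefficient as vertical pixels per meter are assumptions:

```python
# Hypothetical sketch of claim 6. Parameter names are illustrative,
# not taken from the patent.

def vertical_position(fifth_distance_m, tag_y_px, center_y_px,
                      second_pixel_density):
    """Vertical position of the price tag above the floor.

    fifth_distance_m:     height of the horizontal plane through the image
                          center point (e.g. the camera's optical axis)
    second_pixel_density: assumed vertical pixels per meter at the shelf plane
    """
    # Sixth distance: vertical metric offset of the tag from the center point
    sixth_distance = (center_y_px - tag_y_px) / second_pixel_density
    # Fifth distance plus the offset gives the tag's height
    return fifth_distance_m + sixth_distance
```

With the optical axis 1.5 m above the floor, a tag 100 pixels above the image center, and 200 pixels per meter, the tag would sit 2.0 m above the floor under these assumptions.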
7. The commodity display inspection method according to claim 1, further comprising, before the determining spatial position information of the price tag in the display space according to a preset position information conversion strategy and the first position information:
performing deduplication processing on the price tag.
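The claim only states that duplicated price tags are removed before the position conversion; it does not say how. One plausible sketch is proximity-based deduplication across overlapping inspection images, where the 0.05 m threshold and the tuple format are assumptions:

```python
# Hypothetical deduplication step for claim 7. The threshold and the
# detection format are illustrative assumptions, not from the patent.

def deduplicate_price_tags(detections, min_separation_m=0.05):
    """Keep one detection per commodity code and approximate location.

    detections: iterable of (commodity_code, x_m, y_m) tuples, where x_m/y_m
    are map coordinates; two detections of the same code closer than
    min_separation_m are treated as the same physical price tag.
    """
    kept = []
    for code, x, y in detections:
        duplicate = any(
            c == code
            and abs(x - kx) < min_separation_m
            and abs(y - ky) < min_separation_m
            for c, kx, ky in kept
        )
        if not duplicate:
            kept.append((code, x, y))
    return kept
```

For example, two detections of the same commodity code 1 cm apart (typical of overlapping shots) would collapse to one, while a different code at the same spot is kept.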
8. An inspection robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202011241285.XA 2020-11-09 2020-11-09 Commodity display inspection method, inspection robot and storage medium Active CN112489240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011241285.XA CN112489240B (en) 2020-11-09 2020-11-09 Commodity display inspection method, inspection robot and storage medium


Publications (2)

Publication Number Publication Date
CN112489240A CN112489240A (en) 2021-03-12
CN112489240B true CN112489240B (en) 2021-08-13

Family

ID=74928871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011241285.XA Active CN112489240B (en) 2020-11-09 2020-11-09 Commodity display inspection method, inspection robot and storage medium

Country Status (1)

Country Link
CN (1) CN112489240B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110351678A (en) * 2018-04-03 2019-10-18 浙江汉朔电子科技有限公司 Commodity attribute method and device, equipment and storage medium
CN111274845A (en) * 2018-12-04 2020-06-12 杭州海康威视数字技术股份有限公司 Method, device and system for identifying shelf display situation of store and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930264B (en) * 2012-09-29 2015-10-28 李炳华 Based on commodity display information acquisition and analysis system and the method for image recognition technology
US20180260772A1 (en) * 2017-01-31 2018-09-13 Focal Systems, Inc Out-of-stock detection based on images
CN106875451B (en) * 2017-02-27 2020-09-08 安徽华米智能科技有限公司 Camera calibration method and device and electronic equipment
CN109977886B (en) * 2019-03-29 2021-03-09 京东方科技集团股份有限公司 Shelf vacancy rate calculation method and device, electronic equipment and storage medium
CN111739087B (en) * 2020-06-24 2022-11-18 苏宁云计算有限公司 Method and system for generating scene mask


Also Published As

Publication number Publication date
CN112489240A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
Shao et al. Computer vision based target-free 3D vibration displacement measurement of structures
JP3977776B2 (en) Stereo calibration device and stereo image monitoring device using the same
CN112950667B (en) Video labeling method, device, equipment and computer readable storage medium
JP2000517452A (en) Viewing method
US20150302611A1 (en) Vehicle dimension estimation from vehicle images
CN110260857A (en) Calibration method, device and the storage medium of vision map
CN115035162A (en) Monitoring video personnel positioning and tracking method and system based on visual slam
CN113256731A (en) Target detection method and device based on monocular vision
CN110728649A (en) Method and apparatus for generating location information
CN113034586A (en) Road inclination angle detection method and detection system
CN112435223A (en) Target detection method, device and storage medium
CN112378333A (en) Method and device for measuring warehoused goods
US11544839B2 (en) System, apparatus and method for facilitating inspection of a target object
JP5878634B2 (en) Feature extraction method, program, and system
CN111862208B (en) Vehicle positioning method, device and server based on screen optical communication
CN111753858A (en) Point cloud matching method and device and repositioning system
CN112381873A (en) Data labeling method and device
EP4250245A1 (en) System and method for determining a viewpoint of a traffic camera
CN112489240B (en) Commodity display inspection method, inspection robot and storage medium
US9135715B1 (en) Local feature cameras for structure from motion (SFM) problems with generalized cameras
CN111951328A (en) Object position detection method, device, equipment and storage medium
JP2007200364A (en) Stereo calibration apparatus and stereo image monitoring apparatus using the same
CN105869413A (en) Method for measuring traffic flow and speed based on camera video
Neves et al. A calibration algorithm for multi-camera visual surveillance systems based on single-view metrology
CN113808186B (en) Training data generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant