CN117636373A - Electronic price tag detection method and device - Google Patents


Publication number
CN117636373A
CN117636373A (application CN202311330806.2A)
Authority
CN
China
Prior art keywords
price tag
electronic price
image
shelf
robot
Prior art date
Legal status
Pending
Application number
CN202311330806.2A
Other languages
Chinese (zh)
Inventor
张晶
许金亚
李汪佩
杨帅
丁坤
Current Assignee
Shanghai Hanshi Information Technology Co ltd
Original Assignee
Shanghai Hanshi Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Hanshi Information Technology Co ltd filed Critical Shanghai Hanshi Information Technology Co ltd
Priority to CN202311330806.2A priority Critical patent/CN117636373A/en
Publication of CN117636373A publication Critical patent/CN117636373A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an electronic price tag detection method and device. The method comprises: acquiring shelf aisle inspection data collected by a robot, the robot collecting the data along the different shelf aisles of a preset target site map; performing electronic price tag image recognition on the inspection data to obtain a plurality of electronic price tag images; inputting each electronic price tag image into an electronic price tag recognition neural network model to obtain a detection result for each image; establishing a spatial layout image of the electronic price tags according to the robot pose information of the shelf image corresponding to each price tag image; and displaying the detection result of each electronic price tag image in the spatial layout image. The invention improves the efficiency and accuracy of electronic price tag detection and reduces its cost.

Description

Electronic price tag detection method and device
Technical Field
The invention relates to the technical field of image processing, and in particular to an electronic price tag detection method and device.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
With the development of the Internet of Things, the optimization of labor costs and the popularization of low-carbon, environmentally friendly concepts, electronic price tags are increasingly used in offline retail stores, warehouses and similar places. However, as service time increases, an electronic price tag may fail to display normally because of insufficient battery power or a damaged electronic ink screen. Once this occurs, the customer using the electronic price tags is very likely to suffer losses.
Therefore, how to discover in time abnormal electronic price tags that cannot display normally is a significant problem for both the shops that use electronic price tags and their suppliers.
At the current stage, price tags are checked manually by staff walking between the shelves, and abnormal price tags are counted and reported. However, for large commercial sites, manual inspection incurs high labor and time costs, and it inevitably misses many abnormal tags because of fatigue or negligence.
Disclosure of Invention
An embodiment of the invention provides an electronic price tag detection method for improving the efficiency and accuracy of electronic price tag detection and reducing its cost. The method comprises the following steps:
acquiring shelf aisle inspection data collected by a robot, wherein the robot collects the inspection data along different shelf aisles of a preset target site map, and the inspection data comprise shelf images and the robot pose information recorded when each shelf image was captured;
performing electronic price tag image recognition on the shelf aisle inspection data to obtain a plurality of electronic price tag images;
inputting each electronic price tag image into an electronic price tag recognition neural network model to obtain a detection result for each electronic price tag image, wherein the model is obtained by training a deep learning neural network on a historical electronic price tag detection data set comprising different historical electronic price tag images and their corresponding detection results;
establishing a spatial layout image of the electronic price tags according to the robot pose information of the shelf image corresponding to each electronic price tag image, wherein the spatial layout image displays the spatial coordinate information of each electronic price tag image constructed from the robot pose information;
and displaying the detection result of each electronic price tag image in the spatial layout image.
An embodiment of the invention also provides an electronic price tag detection device for improving the efficiency and accuracy of electronic price tag detection and reducing its cost. The device comprises:
a data acquisition module for acquiring the shelf aisle inspection data collected by a robot, wherein the robot collects the inspection data along different shelf aisles of a preset target site map, and the inspection data comprise shelf images and the robot pose information recorded when each shelf image was captured;
an electronic price tag image recognition module for performing electronic price tag image recognition on the shelf aisle inspection data to obtain a plurality of electronic price tag images;
an electronic price tag image detection module for inputting each electronic price tag image into the electronic price tag recognition neural network model to obtain a detection result for each image, wherein the model is obtained by training a deep learning neural network on a historical electronic price tag detection data set comprising different historical electronic price tag images and their corresponding detection results;
a spatial layout image establishing module for establishing the spatial layout image of the electronic price tags according to the robot pose information of the shelf image corresponding to each electronic price tag image, wherein the spatial layout image displays the spatial coordinate information of each electronic price tag image constructed from the robot pose information;
and a detection result display module for displaying the detection result of each electronic price tag image in the spatial layout image.
An embodiment of the invention also provides computer equipment comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the above electronic price tag detection method when executing the computer program.
An embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above electronic price tag detection method.
An embodiment of the invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the above electronic price tag detection method.
In the embodiment of the invention, shelf aisle inspection data collected by a robot are acquired; electronic price tag image recognition is performed on the inspection data to obtain a plurality of electronic price tag images; each image is input into the electronic price tag recognition neural network model, trained on a historical electronic price tag detection data set, to obtain its detection result; a spatial layout image of the electronic price tags is established from the robot pose information of the corresponding shelf images; and the detection result of each electronic price tag image is displayed in that spatial layout image. Compared with the prior-art approach of inspecting electronic price tags manually, the invention recognizes electronic price tag images from the inspection data collected by the robot and detects them automatically with the neural network model. This improves the efficiency and accuracy of electronic price tag detection, avoids the missed and erroneous detections caused by manual inspection, reduces the economic losses that abnormal price tag displays cause to customers, and also reduces the detection cost.
Drawings
For a clearer illustration of the embodiments of the invention or the technical solutions of the prior art, the drawings required by the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort. In the drawings:
FIG. 1 is a flow chart of an electronic price tag detection method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a target site map according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a specific example of shelf aisle position information in a map of the target site according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an exemplary inspection robot according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an electronic price tag detection method according to an embodiment of the present invention;
FIG. 6 is a detailed schematic diagram of a coordinate system conversion and fusion process for a price tag location map in an embodiment of the present invention;
FIG. 7 is a diagram showing a specific example of robot coordinates in a map of a target site according to an embodiment of the present invention;
FIG. 8 is a diagram showing a top view of an inspection robot according to an embodiment of the present invention;
FIG. 9 is a diagram showing a projection view of a high-definition camera shooting shelf according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a computer device for electronic price tag detection in an embodiment of the present invention;
FIG. 11 is a view showing another embodiment of a projection view of a shelf taken by a high-definition camera according to an embodiment of the present invention;
FIG. 12 is a diagram showing a specific example of a front view of an image captured by a high-definition camera according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating a specific example of marked shelf aisles in a map of the target site according to an embodiment of the present invention;
FIG. 14 is a diagram showing a detailed example of a spatial layout of price tags after spatial coordinate fusion according to an embodiment of the present invention;
FIG. 15 is a diagram showing another embodiment of a price tag spatial layout after spatial coordinate fusion according to an embodiment of the present invention;
FIG. 16 is a diagram illustrating a price tag maintenance and management process flow according to an embodiment of the present invention;
FIG. 17 is a schematic structural diagram of an electronic price tag detection device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings. The exemplary embodiments of the present invention and their descriptions herein are for the purpose of explaining the present invention, but are not to be construed as limiting the invention.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. In addition, the term "at least one" herein means any one of a plurality of items or any combination of at least two of them; for example, "at least one of A, B and C" may mean any one or more elements selected from the set consisting of A, B and C.
In the description of the present specification, the terms "comprising," "including," "having," "containing," and the like are open-ended terms, meaning including, but not limited to. Reference to the terms "one embodiment," "a particular embodiment," "some embodiments," "for example," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The sequence of steps involved in the embodiments is used to schematically illustrate the practice of the present application, and is not limited thereto and may be appropriately adjusted as desired.
The data acquisition, storage, use, processing and the like in the technical scheme meet the relevant regulations of national laws and regulations.
With the development of the Internet of Things, the optimization of labor costs and the popularization of low-carbon, environmentally friendly concepts, electronic price tags are increasingly used in offline retail stores, warehouses and similar places. However, as service time increases, an electronic price tag may fail to display normally because of insufficient battery power or a damaged electronic ink screen. Once this occurs, the customer using the electronic price tags is very likely to suffer losses.
Therefore, how to discover in time abnormal electronic price tags that cannot display normally is a significant problem for both the shops that use electronic price tags and their suppliers.
At the current stage, price tags are checked manually by staff walking between the shelves, and abnormal price tags are counted and reported. However, for large commercial sites, manual inspection incurs high labor and time costs, and it inevitably misses many abnormal tags because of fatigue or negligence.
To solve the above problems, an embodiment of the present invention provides an electronic price tag detection method for improving the efficiency and accuracy of electronic price tag detection and reducing its cost. Referring to FIG. 1, the method may comprise:
Step 101: acquiring shelf aisle inspection data collected by a robot; the robot collects the inspection data along different shelf aisles of a preset target site map; the inspection data comprise shelf images and the robot pose information recorded when each shelf image was captured;
step 102: performing electronic price tag image recognition on the shelf aisle inspection data to obtain a plurality of electronic price tag images;
step 103: inputting each electronic price tag image into an electronic price tag recognition neural network model to obtain a detection result for each image; the model is obtained by training a deep learning neural network on a historical electronic price tag detection data set comprising different historical electronic price tag images and their corresponding detection results;
step 104: establishing a spatial layout image of the electronic price tags according to the robot pose information of the shelf image corresponding to each electronic price tag image; the spatial layout image displays the spatial coordinate information of each electronic price tag image constructed from the robot pose information;
step 105: displaying the detection result of each electronic price tag image in the spatial layout image.
In the embodiment of the invention, shelf aisle inspection data collected by a robot are acquired; electronic price tag image recognition is performed on the inspection data to obtain a plurality of electronic price tag images; each image is input into the electronic price tag recognition neural network model, trained on a historical electronic price tag detection data set, to obtain its detection result; a spatial layout image of the electronic price tags is established from the robot pose information of the corresponding shelf images; and the detection result of each electronic price tag image is displayed in that spatial layout image. Compared with the prior-art approach of inspecting electronic price tags manually, the invention recognizes electronic price tag images from the inspection data collected by the robot and detects them automatically with the neural network model. This improves the efficiency and accuracy of electronic price tag detection, avoids the missed and erroneous detections caused by manual inspection, reduces the economic losses that abnormal price tag displays cause to customers, and also reduces the detection cost.
In specific implementation, the shelf aisle inspection data collected by the robot are acquired first; the robot collects the inspection data along different shelf aisles of a preset target site map; the inspection data comprise shelf images and the robot pose information recorded when each shelf image was captured.
In one embodiment, the robot is configured to:
perform, based on a position movement instruction, laser scanning of the shelves in the target site with the onboard lidar to generate a laser scan map of the target site;
and inspect the shelf aisles marked in the laser scan map to obtain the shelf aisle inspection data.
In specific implementation, after the shelf aisle inspection data collected by the robot are acquired, electronic price tag image recognition is performed on them to obtain a plurality of electronic price tag images.
In an embodiment, performing electronic price tag image recognition on the shelf aisle inspection data to obtain a plurality of electronic price tag images comprises:
recognizing, based on a price tag strip deep learning detection model, the price tag strips in the shelf images to obtain the different strip images and the corresponding strip position information;
and cropping and stitching the strip images based on a visual AI algorithm to obtain a plurality of electronic price tag images.
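As a rough illustration of the cropping step, the sketch below splits a detected price tag strip bounding box into equal-width tag crops. The box format, function name and equal-width assumption are illustrative choices, not taken from the patent.

```python
# Sketch of the strip-cutting step: a detected price tag strip is described by
# a bounding box (x, y, w, h) in shelf-image pixels and is cut into
# tag-sized boxes of a fixed width, left to right.

def split_strip_into_tags(strip_box, tag_width):
    """Split a strip bounding box into tag-sized boxes, left to right."""
    x, y, w, h = strip_box
    n = w // tag_width  # whole tags that fit in the strip
    return [(x + i * tag_width, y, tag_width, h) for i in range(n)]

# A 600-px-wide strip yields four 150-px-wide tag boxes:
tags = split_strip_into_tags((100, 40, 600, 30), tag_width=150)
```

In a real pipeline each box would then be used to crop the shelf photo into the individual electronic price tag images that are fed to the recognition model.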
In one embodiment, the robot pose information includes robot position coordinates and a robot orientation angle.
In specific implementation, after electronic price tag image recognition is performed on the shelf aisle inspection data to obtain a plurality of electronic price tag images, each image is input into the electronic price tag recognition neural network model to obtain its detection result. The model is obtained by training a deep learning neural network on a historical electronic price tag detection data set comprising different historical electronic price tag images and their corresponding detection results.
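The detection step can be illustrated with the minimal, runnable sketch below. The `predict` function is only a trivial stand-in for the trained recognition network described above (here it flags a near-white e-ink crop as a likely blank screen); all names and thresholds are assumptions.

```python
def predict(crop_pixels):
    """Placeholder for the trained model: an almost all-white e-ink crop
    suggests a blank (abnormal) screen. Threshold chosen for illustration."""
    mean = sum(crop_pixels) / len(crop_pixels)
    return "abnormal" if mean > 250 else "normal"

def detect_tags(crops):
    """Map each tag id to a detection result; crops holds flat grayscale
    pixel lists, a stand-in for the real electronic price tag images."""
    return {tag_id: predict(pixels) for tag_id, pixels in crops.items()}

results = detect_tags({
    "tag-1": [30, 200, 180, 40],    # mixed pixels: text rendered normally
    "tag-2": [255, 255, 254, 255],  # near-white: likely a blank screen
})
```

The real model is trained on the historical data set of tag images and results; only the surrounding control flow is shown here.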
In specific implementation, after each electronic price tag image is input into the electronic price tag recognition neural network model to obtain its detection result, the spatial layout image of the electronic price tags is established according to the robot pose information of the shelf image corresponding to each electronic price tag image. The spatial layout image displays the spatial coordinate information of each electronic price tag image constructed from the robot pose information.
In one embodiment, establishing the spatial layout image of the electronic price tags according to the robot pose information of the shelf image corresponding to each electronic price tag image comprises:
calculating the three-dimensional spatial position coordinates of each electronic price tag image according to the robot pose information of the corresponding shelf image and the intrinsic and extrinsic parameters of the camera, wherein the coordinates reflect the actual position of the electronic price tag in the spatial layout image;
and combining the three-dimensional spatial position coordinates of all the electronic price tag images to establish the spatial layout image.
In one embodiment, calculating the three-dimensional spatial position coordinates of each electronic price tag image according to the robot pose information of the corresponding shelf image and the intrinsic and extrinsic parameters of the camera comprises:
calculating the lateral component of the actual spatial coordinates of each electronic price tag according to the actual coordinates of the robot in the target site map, and the horizontal pixel width and the horizontal FOV angle among the camera parameters associated with the robot pose information of the corresponding shelf image;
and calculating the height component of the actual spatial coordinates of each electronic price tag according to the same robot coordinates and camera parameters.
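As a worked illustration of this computation, the sketch below recovers a tag's lateral and height offsets from its pixel position, the image size, the camera FOV angles and the camera-to-shelf distance. The equal-angle-per-pixel camera model and all parameter names are assumptions; the patent names the inputs but not the exact formula.

```python
import math

def tag_world_offsets(px, py, img_w, img_h, hfov_deg, vfov_deg, dist_m):
    """Lateral and height offsets (metres) of an image pixel relative to the
    camera's optical axis, for a camera facing the shelf at distance dist_m.

    Uses a simple equal-angle-per-pixel approximation: a pixel's angular
    offset is proportional to its distance from the image centre.
    """
    lateral = dist_m * math.tan(math.radians((px / img_w - 0.5) * hfov_deg))
    height = dist_m * math.tan(math.radians((0.5 - py / img_h) * vfov_deg))
    return lateral, height

# A tag imaged at the exact image centre lies on the optical axis:
dx, dz = tag_world_offsets(960, 540, 1920, 1080, 90.0, 60.0, 0.5)  # (0.0, 0.0)
```

The lateral offset would then be projected along the robot's heading onto its map coordinates to place the tag in the site map, while the height offset gives the tag's shelf level.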
In specific implementation, after the spatial layout image of the electronic price tags is established according to the robot pose information of the shelf image corresponding to each electronic price tag image, the detection result of each electronic price tag image is displayed in the spatial layout image.
In one embodiment, the method further comprises:
raising an alarm, in the spatial layout image, for any electronic price tag whose detection result indicates that its image is not displayed clearly.
A specific example is given below to illustrate the application of the method of the invention; it may comprise the following steps:
The collection of inspection data in this embodiment relies on an autonomously navigating wheeled robot. Several high-definition cameras can be mounted on the robot, with viewing angles that together cover the full shelf height. In the captured photos, the robot detects the price tags and their positions on the shelves with a deep learning algorithm. The features extracted from each detected price tag are then compared with the feature template of an abnormal price tag to find tags that are displayed abnormally. The price tag detection results are fused through the positions of the tags in the map coordinate system, and the detected abnormal price tag positions are reported to the cloud backend of the store management system.
The scheme of this embodiment comprises robot deployment configuration, inspection photographing, the price tag detection and anomaly recognition algorithm, coordinate system conversion and fusion of the price tag positions, and reporting of all price tag positions and abnormal price tag alarms. These are described below:
1. description of robot deployment configuration:
The site survey and deployment configuration includes two main steps: scanning and mapping, and marking the shelves to inspect:
(1) In the first survey step, the robot must scan and build a map of the supermarket environment (see fig. 2); that is, the robot collects inspection data along the different shelf aisles of a preset target site map.
The robot provides the deployer with a mapping tool that can drive the robot forward and backward and rotate it. Starting from some point of the deployed site, the deployer drives the robot through the whole site; the robot scans obstacles with its forward lidar (every spot the laser cannot pass is marked as an obstacle point on the map; the laser scanning range is 20 m) and continuously expands and updates the map. Once the robot has been driven through the whole site, a map of the site is generated, see fig. 2.
The grid map generated by the robot is equivalent to a bitmap in which the value of each pixel is the probability that the cell cannot be passed: the higher the value, the lower the probability that the robot can pass. The map pixels have a fixed correspondence with the physical size of the site; each pixel corresponds to 5 cm of physical space.
The coordinate system of the robot grid map is fixed with the top-left vertex as the (0, 0) coordinate point. The direction pointing right in the image is 0°, and the angle increases counter-clockwise up to 360°. When the map is expanded, the coordinate origin (0, 0) is updated to the new top-left vertex of the map. Fig. 13 shows an example of a target site map scanned by the robot, in which the scanned shelves are also marked, numbered 1-7.
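The grid-map conventions above (top-left origin, 5 cm per pixel, a per-cell impassability probability) can be sketched as follows; the function names and the 0.5 decision threshold are illustrative choices, not from the patent.

```python
CELL_M = 0.05  # physical size of one map pixel: 5 cm

def pixel_to_world(px, py):
    """Convert grid-map pixel indices to metres from the top-left origin."""
    return px * CELL_M, py * CELL_M

def is_blocked(grid, px, py, threshold=0.5):
    """Treat a cell as impassable once its occupancy probability is high."""
    return grid[py][px] >= threshold

grid = [[0.0, 0.9],   # row 0: a free cell and an obstacle cell
        [0.1, 0.2]]   # row 1: two passable cells
```

A 40 x 20 pixel region of such a map therefore covers a 2 m x 1 m patch of the physical site.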
(2) Marking the shelves to inspect on the map
Based on the built map, the start and end points of the robot in each shelf aisle can be marked (the start and end points are the left and right end points of a whole shelf row, which may be a single shelf or a long row formed by several single shelves arranged side by side; the robot's route runs in the aisle facing the shelf row, hence the term shelf aisle). This defines the route along which the robot collects the inspection data for that aisle, which can be set as shown in fig. 3.
Specifically, after the left and right end points of each aisle are marked, the required scanning distance (not less than 50 cm) and the left/right relative position of the shelf with respect to the starting direction must be set (see fig. 3). From the map coordinates of the end points, the distance and the movement direction, the robot calculates the photographing start and end positions and records them locally. During actual inspection, the robot reads this information and inspects the shelves that need inspection.
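A geometric sketch of how the photographing start and end points might be derived from the aisle end points, the scan distance and the shelf side; the perpendicular-offset convention and all names are assumptions, since the patent only lists the inputs to the computation.

```python
import math

def shoot_segment(p_left, p_right, standoff_m, shelf_side):
    """Offset the aisle end points perpendicular to the shelf row so that the
    robot keeps the scanning distance `standoff_m` from the shelf face.

    shelf_side is 'left' or 'right' relative to the travel direction
    (start -> end), matching the left/right setting made during the survey.
    """
    dx, dy = p_right[0] - p_left[0], p_right[1] - p_left[1]
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length  # unit vector along the travel direction
    # Normal pointing away from the shelf (y-up convention, an assumption).
    nx, ny = (uy, -ux) if shelf_side == "left" else (-uy, ux)
    start = (p_left[0] + nx * standoff_m, p_left[1] + ny * standoff_m)
    end = (p_right[0] + nx * standoff_m, p_right[1] + ny * standoff_m)
    return start, end
```

For a shelf running from (0, 0) to (10, 0) on the robot's left, a 0.5 m standoff places the photographing segment half a metre off the shelf face, parallel to it.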
2. Robot inspection photographing introduction
During inspection, the robot moves linearly along the shelf aisle from the start point to the end point at the set distance from the shelf; the movement is illustrated in fig. 4.
Shelf A is on the left side of the robot's direction of travel, so the left/right relative position set for shelf A during the site survey is "left"; shelf B is on the right side of the direction of travel, so the relative position set for shelf B is "right".
Specifically, the robot inspection photographing workflow may be as shown in fig. 5.
As an example, the following steps are performed by the robot end:
(1) Inquiring the inspection configuration, and taking out the starting point position information of the first inspection channel;
(2) Upon reaching the specified start position of the channel, the robot begins moving linearly toward the end point at a constant speed (optimal speed: 20 cm/s);
(3) Shooting with multiple high-definition cameras begins. The cameras are exposed simultaneously via a hardware synchronization signal, and the timestamp of each exposure start is recorded. The robot chassis continuously reports its pose at 100 Hz; the chassis pose whose timestamp is closest to the photo's exposure timestamp is selected and stored with the photo as that photo's robot pose data (three values: the x and y plane coordinates on the robot map and the orientation angle).
(4) After a photo is generated, it is sent to the AI algorithm, which identifies the pixel coordinates of the price tags and commodity facing surfaces in the photo; these are stored as a data file associated with the photo for later use by the post-processing program.
(5) If an obstacle is encountered while scanning a channel, the robot stops moving and photographing and waits for the obstacle to leave, subject to a configurable timeout (set per scene; 3 minutes is reasonable). If the obstacle leaves within the timeout, the robot resumes linear motion toward the end point and continues photographing; if the obstacle is still present after the timeout, the channel scan is abandoned and inspection of the next channel begins.
(6) After one channel is scanned, the start-point information of the next channel to be inspected is retrieved and navigation to it begins.
(7) And after all the channels are scanned, the robot returns to the charging pile.
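The timestamp matching in step (3) — picking the 100 Hz chassis pose closest to each camera exposure — might be sketched as follows (a sketch only; the record layout and function name are assumptions):

```python
import bisect

def nearest_pose(pose_log, exposure_ts):
    """pose_log: list of (timestamp, x, y, r) tuples sorted by timestamp,
    accumulated from the chassis at ~100 Hz. Returns the pose whose
    timestamp is closest to the camera exposure timestamp."""
    times = [entry[0] for entry in pose_log]
    i = bisect.bisect_left(times, exposure_ts)
    # the closest pose is either the one just before or just after
    candidates = pose_log[max(0, i - 1): i + 1]
    return min(candidates, key=lambda p: abs(p[0] - exposure_ts))
```

The binary search keeps the lookup cheap even over the long pose log accumulated along a full channel.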
3. Flow of price tag detection and abnormal price tag identification algorithm reasoning
First, the algorithm uses a price-tag extrusion-bar depth model to detect all extrusion-bar positions on each camera's picture, crops and splices the detected price-tag extrusion bars, and feeds the spliced image into a price-tag template-matching algorithm to detect whether abnormal price tags exist. The detection result is stored in the price-tag information file for reading by the post-processing stage.
4. The conversion and fusion process of the price tag position map coordinate system, as shown in fig. 6, may include the following steps:
1. conversion processing of a price tag coordinate system;
2. map coordinate position information of the left end point and the right end point of the channel is read, and calibration external parameter data of the camera are read; the external parameter calibration of the center position of each camera and the robot is obtained by calibrating in the assembly production link of the robot;
3. reading the reasoning result of the price tag on the goods shelf and the goods arrangement surface pixel position stored by the AI algorithm;
4. calculating the x, y coordinates of each price tag and commodity in the robot map coordinate system from the angle between the image pixel x coordinate of the price tag or commodity facing surface (the upper-left and lower-right vertex coordinates of the price tag) and the camera center pixel, together with the map pose corresponding to the camera center position;
5. calculating the actual physical height (z coordinate) of each price tag and commodity from the image pixel y coordinate and the camera's physical mounting height in the external parameters;
6. reporting the converted spatial coordinates (x, y, z) of all price tags and commodities on the shelf to the cloud background.
The specific method mentioned in the flowchart for converting image pixel coordinates into spatial coordinates is explained below:
Fig. 7 shows the robot moving straight from the start point along the shelf path to the end point while taking photos. The circle center O is the robot's center point, i.e. the coordinate point the robot uses for map-based navigation. The start-point coordinates are (Xs, Ys, Rs), the robot pose at the first photographing moment is (X1, Y1, R1), and so on. x and y are the coordinate values on the x- and y-axes of the robot map coordinate system, and r is the angle of the robot's forward direction.
The line segment M is the distance between the center point of each photographing point robot and the shelf. The map coordinate positions of the left and right end points of the shelf are obtained in the engineering investigation step, so that the plane geometric expression of the straight line segment of the shelf in the map coordinate system can be determined by the two coordinate points:
(y-YL)/(YR-YL)=(x-XL)/(XR-XL)
Where, (XL, YL) is the physical coordinate of the left end of the shelf channel on the map and (XR, YR) is the physical coordinate of the right end of the shelf channel on the map.
The perpendicular distance between the robot center O at each photographing point and this line, i.e. the length of M, can be calculated from the line equation and the pose at each photographing point. The perpendicular distance between the camera center and the photographed shelf then follows from the camera's extrinsic relation to the robot center. The distance between the camera and the robot center point is the physical length of line segment N in fig. 8; it is one of the external parameters mentioned above and is called Ld2r.
If the robot's camera system includes a depth camera, the distance between the depth camera and the shelf can be read directly from it. The actual distance between each high-definition camera and the shelf is then obtained by applying the rotation-translation (RT) matrix describing the rigid-body spatial relation between the depth camera and each of the other high-definition cameras. If the shooting system has no depth camera, the actual distance between the robot center and the shelf is obtained, by plane geometry, as the distance between the map coordinates of the robot's photographing position and the foot of the perpendicular on the shelf line segment; subtracting the distance between the high-definition camera and the robot center then gives the actual distance between the high-definition camera and the shelf.
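The perpendicular distance M implied by the line equation above reduces to the standard point-to-line distance; a sketch (names are ours):

```python
import math

def dist_to_shelf(robot_xy, left, right):
    """Perpendicular distance M from the robot centre to the shelf line
    through the channel's left end point (XL, YL) and right end point
    (XR, YR), i.e. the line (y-YL)/(YR-YL) = (x-XL)/(XR-XL)."""
    (x, y), (xl, yl), (xr, yr) = robot_xy, left, right
    # |cross product of (P - L) with the segment| / segment length
    num = abs((yr - yl) * (x - xl) - (xr - xl) * (y - yl))
    return num / math.hypot(xr - xl, yr - yl)
```

Subtracting the extrinsic offset Ld2r from this value gives the camera-to-shelf distance when no depth camera is available.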
For the pixel coordinates of the pixel points in each photo, the physical coordinates of the position of each pixel point on the shelf can be calculated as follows:
1. The physical offset of the pixel's abscissa relative to the camera center, measured along the shelf direction, can be calculated by trigonometry from the physical distance between the high-definition camera and the shelf and the angle between the pixel's x coordinate and the image's horizontal center point.
2. Calculating the actual physical height from the pixel's y coordinate requires the actual physical mounting height of each high-definition camera, i.e. the optical-center height Hc, and the y-axis pixel coordinate Yc of the photo corresponding to that height (if the camera is mounted strictly horizontally, Yc is half the photo's vertical resolution). From the angle between the y pixel coordinate of the price tag or commodity facing surface and Yc, and the physical distance between the high-definition camera and the shelf, trigonometry gives the physical offset of each pixel coordinate y relative to the optical-center height of the corresponding camera, from which the actual physical height corresponding to y is obtained.
The technical process of the physical coordinates of the pixel points is specifically illustrated as follows:
taking fig. 9 and 11 as an example, how the vertex of a price tag is converted from two-dimensional pixel coordinates in an image to three-dimensional physical coordinates, and how the related camera internal and external parameters are used in this calculation process, and the calculation and use of the trigonometric function relationships described above are described.
Fig. 9 is a plan view of the robot photographing the shelf; its coordinate relationships correspond to the robot map. Line segment Q in the figure is the horizontal-direction width between the camera optical center and a certain point on the shelf in the image shot by the high-definition camera (the segment is parallel to the x-axis of the camera's photo coordinate system). Fig. 11 is the vertical-direction view of the robot photographing the shelf; line segment T is the vertical-direction height between the camera optical center and a certain point on the shelf in the image (the segment is parallel to the y-axis of the camera's photo coordinate system).
As shown in fig. 12, the physical coordinates of each pixel point in the high-definition camera's frame, in the three-dimensional space of the robot map coordinate system, can be calculated according to the following principle. Point C in fig. 12 is the pixel corresponding to the projection, onto the shelf-row photo, of the camera's center axis held horizontal to the ground. Point C in fig. 12, the point labeled Cx in fig. 9, and the point labeled Cy in fig. 11 are projections of the same point at different viewing angles.
1. Assuming that the two-dimensional pixel coordinates of the image of the C point on the screen shot by the high-definition camera are (xc, yc), it is known that:
xc=Wp÷2
where Wp is the horizontal pixel width of the high-definition camera; for example, if the horizontal width is 4096 pixels, then Wp = 4096. The pixel value yc corresponds to the physical mounting height of the high-definition camera, and this correspondence between pixel value and camera height is unchanged no matter how far the robot is from the shelf.
2. Let the coordinates of point C in the robot map coordinate system, viewed from above, be named (xr′, yr′), let the center point of the robot's current position be named point O, and let its map coordinates be (xr, yr). The robot center coordinates (xr, yr) are known (they are the photographing-point coordinates obtained in the "robot inspection flow chart" of fig. 5). The map coordinates (xr′, yr′) of point C can then be calculated from the distance Lr2s between the robot center and the shelf (i.e. the length of line segment B in fig. 9) and the angle θr in fig. 9.
3. From the relation Lr2s = Ld2s + Ld2r, it can further be deduced that Ld2s = Lr2s − Ld2r.
Ld2s is the distance, in the map coordinate system, from the high-definition camera's coordinate position on the robot to the shelf. Ld2r is the length of line segment N in fig. 8, one of the robot's camera external parameters. Since the robot travels parallel to the shelf channel, the angle of its direction of travel is given by the slope of the line joining the channel's left and right end points, i.e. the angle between the shelf channel and the positive x direction of the map coordinate system; this angle is calculated and stored in the robot during survey deployment, when the photographing start and end points are computed, and can be read directly. By plane geometry, the angle θr in fig. 9 equals this channel angle plus 90°.
4. From the geometrical relationship, it can be seen that:
xr′ = xr − cosθr × Lr2s
yr′ = yr − sinθr × Lr2s
Let cosθr = β and −sinθr = α; then
xr′ = xr − β × Lr2s
yr′ = yr + α × Lr2s
Thus, the correspondence between the two-dimensional x-axis pixel coordinate of the high-definition camera's photo center point and the coordinates of the robot chassis center point is:
xc → (xr − β × Lr2s, yr + α × Lr2s)
5. based on the relation, the coordinate values on the robot map corresponding to the x-axis coordinates of the two-dimensional pixels on all the photos can be calculated, and the calculation method is as follows:
the actual physical length of the line segment Q in fig. 9, defined as Lc12Cx (corresponding to the physical distance between the point C1 photographed by the image of the high-definition camera and the horizontal center point Cx of the image), can be calculated from the angle θx between the point C1 and the point Cx and the distance Lc2s between the high-definition camera and the shelf;
6. further, how Lc2s is calculated is described taking the example that the robot is provided with a depth camera;
by means of the formula: lc2s=ld2s+lc2d,
where Lc2d is the distance between the high-definition camera and the depth camera along the chassis radial direction; it can be obtained from the translation (T) component of the RT matrix between the high-definition camera and the depth camera in the camera extrinsic calibration.
Thus: lc12cx=Lc2s×tan (θx)
7. Assume the x-axis pixel coordinate of point C1 in the high-definition camera photo of fig. 9 is x1; the pixel distance between point C1 and point Cx is then xc − x1. The camera's horizontal focal length in pixels, defined as Lfocal_h, is fixed (it is determined by the lens field of view and the photo resolution, and is a fixed parameter of the lens and camera).
8. Assuming that the reference lateral field angle of the high-definition camera is θfov_h, Lfocal_h can be calculated by the following formula:
Lfocal_h = (Wp/2) ÷ tan(θfov_h/2)
tan(θx) = (xc − x1) ÷ Lfocal_h
Then, it is possible to obtain:
Lc12cx=Lc2s×((xc-x1)÷Lfocal_h)
9. The coordinates (x1, y1) of point C1 on the robot map can be calculated as follows:
x1 = xr′ + α × Lc12cx
y1 = yr′ + β × Lc12cx
10. If point C1 is the top-left vertex of a price tag in the high-definition camera photo, the coordinates (x1, y1) of that price tag on the robot map (i.e. the top-down spatial view of the robot's entire operating area) are thereby obtained.
11. To obtain the complete spatial coordinate information of the price tag, the physical height z1 of the price tag is also calculated by the y-axis coordinate of the C1 point on the photo of the high-definition camera.
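Steps 4–9 above, converting a price tag vertex's x pixel coordinate into map coordinates (x1, y1), can be sketched as follows (angles in radians; parameter names are our assumptions):

```python
import math

def pixel_x_to_map(x_px, Wp, theta_fov_h, Lc2s, Lr2s, robot_xy, theta_r):
    """Map the x pixel coordinate of a price-tag vertex to (x, y) on the
    robot map, following the document's formulas: Lfocal_h from the
    lateral field of view, the along-shelf offset Lc12Cx, then an offset
    from point C (the projection of the camera axis onto the shelf)."""
    xc = Wp / 2                                    # horizontal image centre
    Lfocal_h = (Wp / 2) / math.tan(theta_fov_h / 2)
    Lc12cx = Lc2s * (xc - x_px) / Lfocal_h         # = Lc2s * tan(theta_x)
    beta, alpha = math.cos(theta_r), -math.sin(theta_r)
    xr_p = robot_xy[0] - beta * Lr2s               # point C: (xr', yr')
    yr_p = robot_xy[1] + alpha * Lr2s
    return (xr_p + alpha * Lc12cx, yr_p + beta * Lc12cx)
```

With θr = 90°, a robot at (1, 0), Lr2s = 1 and the pixel at the image centre, point C itself is returned at (1, −1); off-centre pixels shift the result along the shelf direction.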
The method for calculating the converted price tag height is described as follows:
1. the length Lc12Cy of the line segment T in fig. 11 (corresponding to the physical distance between the C1 point photographed by the image of the high-definition camera and the vertical center point Cy of the image) can be calculated from the angle θy between the C1 point and the Cy point and the distance Lc2s between the high-definition camera and the shelf:
Lc12cy=Lc2s×tan(θy)
2. Assume the y-axis pixel coordinate of point C1 in the high-definition camera photo of fig. 11 is y1; the pixel distance between point C1 and point C is then y1 − yc. The camera's vertical focal length in pixels, Lfocal_v, corresponding to vertical imaging, is likewise fixed.
3. Assuming that the reference vertical field angle of the high-definition camera is θfov_v and the vertical pixel resolution is Hp, Lfocal_v can be calculated by the following formula:
Lfocal_v = (Hp/2) ÷ tan(θfov_v/2)
tan(θy) = (y1 − yc) ÷ Lfocal_v
It is then possible to determine:
Lc12cy=Lc2s×((y1-yc)÷Lfocal_v)
4. since the physical height of point C is the mounting height of the high definition camera (referred to as zc), this value is known as an external reference to the camera. Then the physical height z1 of the C1 point can be obtained by calculating the physical height of the C point:
z1=zc-Lc12cy
5. We thus obtain the three-dimensional spatial coordinates (x1, y1, z1) of the price tag's top-left vertex, corresponding to (−x1, −z1, y1) in a Cartesian right-handed coordinate system.
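The height conversion of steps 1–4 can likewise be sketched (angles in radians; yc defaults to Hp/2, i.e. a strictly horizontally mounted camera; names are ours):

```python
import math

def pixel_y_to_height(y_px, Hp, theta_fov_v, Lc2s, zc, yc=None):
    """z coordinate of a price-tag vertex from its y pixel coordinate:
    Lfocal_v from the vertical field of view, the vertical offset Lc12Cy,
    subtracted from the camera optical-centre height zc (image y grows
    downward, so pixels below the centre map to lower physical heights)."""
    if yc is None:
        yc = Hp / 2
    Lfocal_v = (Hp / 2) / math.tan(theta_fov_v / 2)
    Lc12cy = Lc2s * (y_px - yc) / Lfocal_v   # = Lc2s * tan(theta_y)
    return zc - Lc12cy
```

A pixel at the image's vertical centre returns the camera mounting height zc itself, as expected from z1 = zc − Lc12Cy.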
After the spatial coordinate information of all price tags has been constructed, the background can present a spatial layout map of all price tags, as shown in figs. 14 and 15. Fig. 14 shows an example of the electronic price tag images under the spatial layout map together with detection results indicating image clarity; fig. 15 shows the electronic price tag images and their corresponding detection results, covering both the case where the image is clear and the case where it is not.
5. Price tag location and abnormal price tag alarm information reporting
After the robot has processed all the information locally, it uploads the spatial coordinates of all price tags and the labeling data of abnormal price tags to the electronic price tag system's cloud background via network communication. The cloud background can display the full price tag layout and the locations of price tags carrying anomaly alerts, as shown in fig. 15 (including the number of the shelf to which each price tag is attached and its layer and column position on that shelf).
The cloud background's processing flow for the price tag data is shown in fig. 16 and comprises: reporting of the robot data; alerting on abnormal price tags through the electronic price tag system's cloud background and displaying the shelf diagram of each abnormal price tag for manual review and confirmation; and, after confirmation, manual replacement of the price tag according to its shelf position.
The scheme of the invention accurately addresses the operation and maintenance pain points of electronic price tag products and improves the efficiency of finding and eliminating abnormally displayed electronic price tags. It helps maintain durable customer relationships and reduces the operation and maintenance costs caused by customer complaints and claims.
Of course, it is to be understood that other variations of the above detailed procedures are also possible, and all related variations should fall within the protection scope of the present invention.
In the embodiment of the invention, shelf-channel inspection data collected by a robot is acquired; the robot collects the inspection data from the different shelf channels in a preset target-site map, and the inspection data comprise shelf images and the robot pose information at the time each shelf image was collected. Electronic price tag image recognition is performed on the shelf-channel inspection data to obtain a plurality of electronic price tag images. Each electronic price tag image is input into an electronic price tag recognition neural network model to obtain its detection result; the model is obtained by training a deep-learning neural network with a historical electronic price tag detection dataset, which comprises different historical electronic price tag images and their corresponding detection results. A spatial layout image of the electronic price tags is then established from the robot pose information of the shelf image corresponding to each electronic price tag image; the layout image displays the spatial coordinate information of each electronic price tag image constructed from the robot pose information. Finally, the detection result of each electronic price tag image is displayed in the spatial layout image. Compared with the prior-art practice of manually inspecting electronic price tags, identifying the electronic price tag images from robot-collected inspection data and detecting them automatically with the neural network model improves the efficiency and precision of electronic price tag detection, avoids the erroneous detection caused by manual inspection, reduces the economic loss to customers caused by abnormally displayed price tags, and also reduces the detection cost of the electronic price tag.
The embodiment of the invention also provides an electronic price tag detection apparatus, described in the following embodiments. Since the principle by which the apparatus solves the problem is similar to that of the electronic price tag detection method, the implementation of the apparatus may refer to the implementation of the method; repeated description is omitted.
The embodiment of the invention also provides an electronic price tag detection device, which is used for improving the efficiency and the precision of electronic price tag detection and reducing the detection cost of the electronic price tag, as shown in fig. 17, and comprises the following steps:
the data acquisition module 1701 is used for acquiring shelf channel inspection data acquired by the robot; the robot acquires inspection data of the goods shelf channels based on different goods shelf channels in a preset target site map; the inspection data comprise shelf images and robot pose information when the shelf images are collected;
the electronic price tag image recognition module 1702 is configured to perform electronic price tag image recognition on the shelf channel inspection data to obtain a plurality of electronic price tag images;
the electronic price tag image detection module 1703 is configured to input each electronic price tag image into the electronic price tag identification neural network model to obtain a detection result of each electronic price tag image; the electronic price tag recognition neural network model is obtained by training the deep learning neural network model by taking a historical electronic price tag detection data set as training data; the historical electronic price tag detection dataset comprises: different historical electronic price tag images and corresponding detection results;
The space layout image establishing module 1704 is configured to establish a space layout image of each electronic price tag according to the pose information of the robot of the shelf image corresponding to each electronic price tag image; the spatial layout image of the electronic price tag is used for displaying the spatial coordinate information of each electronic price tag image constructed by the pose information of the robot;
and the detection result display module 1705 is configured to display a detection result of each electronic price tag image in the spatial layout image of the price tag.
In one embodiment, the robot is for:
based on the position moving instruction, carrying out laser scanning on a goods shelf in the target site map by using the carried laser radar, and generating a laser scanning image of the target site map;
and carrying out inspection on the goods shelf channel based on the goods shelf channel marked in the laser scanning image to obtain the inspection data of the goods shelf channel.
In one embodiment, the robot pose information includes robot position coordinates and a robot orientation angle.
In one embodiment, the electronic price tag image recognition is performed on the shelf channel inspection data to obtain a plurality of electronic price tag images, including:
based on a price tag extrusion strip depth algorithm model, carrying out electronic price tag image identification on the shelf channel inspection data to obtain different shelf extrusion strip images and corresponding extrusion strip position information in shelf images;
And based on a visual AI algorithm, carrying out image cutting and splicing on the shelf extrusion bar images to obtain a plurality of electronic price tag images.
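The cutting and splicing of the detected extrusion-bar regions could look like the following NumPy sketch (the box format and zero-padding policy are our assumptions; a real pipeline would more likely resize the strips to a common height):

```python
import numpy as np

def splice_bars(shelf_img, bar_boxes):
    """Cut each detected extrusion-bar region (x0, y0, x1, y1 pixel box)
    out of the shelf photo and splice the strips side by side for the
    template-matching stage; shorter strips are zero-padded at the bottom."""
    h = max(y1 - y0 for _, y0, _, y1 in bar_boxes)
    strips = []
    for x0, y0, x1, y1 in bar_boxes:
        strip = shelf_img[y0:y1, x0:x1]
        pad = np.zeros((h - strip.shape[0], strip.shape[1], 3), shelf_img.dtype)
        strips.append(np.vstack([strip, pad]))   # pad shorter strips
    return np.hstack(strips)
```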
In one embodiment, the creating a spatial layout image of each electronic price tag according to the robot pose information of the shelf image corresponding to each electronic price tag image includes:
according to the robot pose information of the shelf image corresponding to each electronic price tag image, the internal and external parameters of the camera and the pixel information of the shelf image, calculating the three-dimensional space position coordinates of each electronic price tag image; the three-dimensional space position coordinates of the electronic price tag image are used for reflecting the actual position of the electronic price tag in the space layout image;
and combining the three-dimensional space position coordinates of each electronic price tag image to establish a space layout image of the electronic price tag.
In one embodiment, calculating three-dimensional spatial position coordinates of each electronic price tag image according to robot pose information of a shelf image corresponding to each electronic price tag image, internal and external parameters of a camera and pixel information of the shelf image comprises:
aiming at each electronic price tag image, calculating a transverse component and a longitudinal component in the actual space coordinate of the electronic price tag according to the robot pose information of a shelf image corresponding to the electronic price tag image, the pixel information of the shelf image, and focal distance parameters and transverse fov angle parameters in internal and external parameters of a camera;
And calculating the height component in the actual space coordinate of the electronic price tag according to the robot pose information of the shelf image corresponding to the electronic price tag image, the pixel information of the shelf image, the longitudinal pixel resolution parameter, the longitudinal fov angle parameter and the camera mounting height in the internal and external parameters of the camera.
In one embodiment, further comprising: an alarm module for:
and alarming the electronic price tag with unclear electronic price tag image represented by the detection result of the electronic price tag image in the space layout image of the price tag.
The embodiment of the invention provides a computer device for realizing all or part of contents in the electronic price tag detection method, which specifically comprises the following contents:
a processor (processor), a memory (memory), a communication interface (Communications Interface), and a bus; the processor, the memory and the communication interface complete communication with each other through the bus; the communication interface is used for realizing information transmission between related devices; the computer device may be a desktop computer, a tablet computer, a mobile terminal, or the like, and the embodiment is not limited thereto. In this embodiment, the computer device may be implemented with reference to the embodiment for implementing the electronic price tag detection method and the embodiment for implementing the electronic price tag detection apparatus, and the contents thereof are incorporated herein, and the repetition is omitted.
Fig. 10 is a schematic block diagram of a system configuration of a computer device 1000 of an embodiment of the present application. As shown in fig. 10, the computer device 1000 may include a central processor 1001 and a memory 1002; the memory 1002 is coupled to the central processor 1001. Notably, this fig. 10 is exemplary; other types of structures may also be used in addition to or in place of the structures to implement telecommunications functions or other functions.
In one embodiment, the electronic price tag detection functionality may be integrated into the central processor 1001. The central processor 1001 may be configured to control, among other things, the following:
acquiring shelf channel inspection data acquired by a robot; the robot acquires inspection data of the goods shelf channels based on different goods shelf channels in a preset target site map; the inspection data comprise shelf images and robot pose information when the shelf images are collected;
electronic price tag image recognition is carried out on the goods shelf channel inspection data, and a plurality of electronic price tag images are obtained;
inputting each electronic price tag image into an electronic price tag identification neural network model to obtain a detection result of each electronic price tag image; the electronic price tag recognition neural network model is obtained by training the deep learning neural network model by taking a historical electronic price tag detection data set as training data; the historical electronic price tag detection dataset comprises: different historical electronic price tag images and corresponding detection results;
According to the robot pose information of the shelf image corresponding to each electronic price tag image, establishing a space layout image of the electronic price tag; the spatial layout image of the electronic price tag is used for displaying the spatial coordinate information of each electronic price tag image constructed by the pose information of the robot;
and displaying the detection result of each electronic price tag image in the space layout image of the price tag.
In another embodiment, the electronic price tag detection apparatus may be configured separately from the central processor 1001, for example, the electronic price tag detection apparatus may be configured as a chip connected to the central processor 1001, and the electronic price tag detection function is implemented by control of the central processor.
As shown in fig. 10, the computer device 1000 may further include: a communication module 1003, an input unit 1004, an audio processor 1005, a display 1006, a power supply 1007. It is noted that the computer device 1000 need not include all of the components shown in FIG. 10; in addition, the computer device 1000 may further include components not shown in fig. 10, to which reference is made to the related art.
As shown in fig. 10, the central processor 1001, sometimes also referred to as a controller or operational control, may include a microprocessor or other processor device and/or logic device, and the central processor 1001 receives input and controls the operation of the various components of the computer device 1000.
The memory 1002 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable device. It may store information relating to failures as well as the programs for processing that information, and the central processor 1001 can execute the programs stored in the memory 1002 to realize information storage, processing, and the like.
The input unit 1004 provides input to the central processor 1001; it is, for example, a key or a touch input device. The power supply 1007 supplies power to the computer device 1000. The display 1006 displays objects such as images and text; it may be, for example, but is not limited to, an LCD display.
The memory 1002 may be a solid-state memory such as a read-only memory (ROM), a random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when powered down, and that can be selectively erased and rewritten with new data; an example of such a memory is sometimes referred to as an EPROM. The memory 1002 may also be some other type of device. The memory 1002 includes a buffer memory 1021 (sometimes referred to as a buffer), and may include an application/function storage 1022 for storing application programs and function programs, or procedures for performing the operations of the computer device 1000, to be executed by the central processor 1001.
The memory 1002 may also include a data store 1023 for storing data such as contacts, digital data, pictures, sounds, and/or any other data used by the computer device. A driver store 1024 of the memory 1002 may include various drivers of the computer device for communication functions and/or for performing other functions of the computer device (e.g., messaging applications, address book applications, etc.).
The communication module 1003 is a transmitter/receiver that transmits and receives signals via an antenna 1008. The communication module (transmitter/receiver) 1003 is coupled to the central processor 1001 to provide input signals and receive output signals, in the same manner as a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules 1003, such as a cellular network module, a Bluetooth module, and/or a wireless LAN module, may be provided in the same computer device. The communication module (transmitter/receiver) 1003 is also coupled to a speaker 1009 and a microphone 1010 via the audio processor 1005, to provide audio output via the speaker 1009 and receive audio input from the microphone 1010, thereby implementing ordinary telecommunications functions. The audio processor 1005 may include any suitable buffers, decoders, amplifiers, and so forth. In addition, the audio processor 1005 is coupled to the central processor 1001, so that sound can be recorded locally through the microphone 1010 and locally stored sound can be played through the speaker 1009.
An embodiment of the invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the electronic price tag detection method described above.
An embodiment of the invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the electronic price tag detection method described above.
In the embodiment of the invention, shelf channel inspection data collected by a robot is acquired, the robot collecting the inspection data over different shelf channels in a preset target site map, the inspection data comprising shelf images and the robot pose information at the time each shelf image was collected. Electronic price tag image recognition is performed on the shelf channel inspection data to obtain a plurality of electronic price tag images. Each electronic price tag image is input into an electronic price tag recognition neural network model to obtain a detection result for each electronic price tag image, the model being obtained by training a deep learning neural network model with a historical electronic price tag detection data set comprising different historical electronic price tag images and their corresponding detection results. A spatial layout image of the electronic price tags is established according to the robot pose information of the shelf image corresponding to each electronic price tag image, the spatial layout image displaying the spatial coordinate information of each electronic price tag image constructed from the robot pose information, and the detection result of each electronic price tag image is displayed in that spatial layout image. Compared with the prior-art approach of manually inspecting electronic price tags, identifying the electronic price tag images from inspection data acquired by a robot and automatically detecting them with an electronic price tag recognition neural network model improves both the efficiency and the accuracy of electronic price tag detection, avoids the erroneous detections caused by manual inspection, reduces the economic losses suffered by customers using electronic price tags when a price tag displays abnormally, and also reduces the cost of electronic price tag detection.
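The construction of spatial coordinates from robot pose and camera parameters can be sketched under a standard pinhole-camera model. The patent does not give its exact formulas, so everything here is an assumption: the horizontal/vertical FOV converts a pixel's offset from the image centre into bearing angles, `depth_m` is an assumed camera-to-shelf distance, and the camera is assumed to face along the robot's heading:

```python
import math

def tag_world_position(robot_x, robot_y, robot_yaw,
                       u, v, image_w, image_h,
                       hfov_deg, vfov_deg,
                       depth_m, cam_height_m):
    """Estimate a price tag's world position from robot pose and a pixel.

    Standard pinhole-camera sketch (the patent does not disclose its
    exact formulas). (u, v) is the tag's pixel in the shelf image,
    depth_m an assumed camera-to-shelf distance, cam_height_m the
    camera mounting height, and robot_yaw the robot heading in radians.
    """
    # Bearing angles of the pixel relative to the optical axis.
    ang_h = math.radians(hfov_deg) * (u / image_w - 0.5)
    ang_v = math.radians(vfov_deg) * (0.5 - v / image_h)

    # Offsets in the camera frame: forward along the optical axis,
    # lateral across it.
    forward = depth_m
    lateral = depth_m * math.tan(ang_h)

    # Rotate into the world frame using the robot pose (x, y, yaw);
    # the height component adds the vertical offset to the mount height.
    wx = robot_x + forward * math.cos(robot_yaw) - lateral * math.sin(robot_yaw)
    wy = robot_y + forward * math.sin(robot_yaw) + lateral * math.cos(robot_yaw)
    wz = cam_height_m + depth_m * math.tan(ang_v)
    return wx, wy, wz
```

For a tag at the image centre with the robot facing along the x-axis, the estimate reduces to the robot position shifted forward by the shelf distance, at the camera mounting height, which matches the lateral/longitudinal/height decomposition described in the summary.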
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of the embodiments is provided to illustrate the general principles of the invention and is not intended to limit the scope of the invention to the particular embodiments; any modifications, equivalents, improvements, and the like that fall within the spirit and principles of the invention are intended to be included within its scope.

Claims (11)

1. An electronic price tag detection method, comprising:
acquiring shelf channel inspection data collected by a robot; wherein the robot collects the shelf channel inspection data over different shelf channels in a preset target site map; and the inspection data comprise shelf images and the robot pose information at the time each shelf image was collected;
performing electronic price tag image recognition on the shelf channel inspection data to obtain a plurality of electronic price tag images;
inputting each electronic price tag image into an electronic price tag recognition neural network model to obtain a detection result for each electronic price tag image; wherein the electronic price tag recognition neural network model is obtained by training a deep learning neural network model with a historical electronic price tag detection data set as training data; and the historical electronic price tag detection data set comprises different historical electronic price tag images and their corresponding detection results;
establishing a spatial layout image of the electronic price tags according to the robot pose information of the shelf image corresponding to each electronic price tag image; wherein the spatial layout image of the electronic price tags is used for displaying the spatial coordinate information of each electronic price tag image, constructed from the robot pose information;
and displaying the detection result of each electronic price tag image in the spatial layout image of the price tags.
2. The method of claim 1, wherein the robot is configured to:
perform, based on a position movement instruction, laser scanning of the shelves in the target site map using an onboard laser radar, to generate a laser scan image of the target site map;
and inspect the shelf channels marked in the laser scan image to obtain the shelf channel inspection data.
3. The method of claim 1, wherein the robot pose information comprises robot position coordinates and a robot orientation angle.
4. The method of claim 1, wherein performing electronic price tag image recognition on the shelf channel inspection data to obtain a plurality of electronic price tag images comprises:
performing, based on a price tag extrusion strip deep learning algorithm model, electronic price tag image recognition on the shelf channel inspection data to obtain different shelf extrusion strip images and the corresponding extrusion strip position information in the shelf images;
and cropping and stitching the shelf extrusion strip images based on a visual AI algorithm to obtain a plurality of electronic price tag images.
5. The method of claim 1, wherein establishing a spatial layout image of the electronic price tags according to the robot pose information of the shelf image corresponding to each electronic price tag image comprises:
calculating three-dimensional spatial position coordinates of each electronic price tag image according to the robot pose information of the shelf image corresponding to each electronic price tag image, the intrinsic and extrinsic parameters of the camera, and the pixel information of the shelf image; wherein the three-dimensional spatial position coordinates of an electronic price tag image reflect the actual position of the electronic price tag in the spatial layout image;
and combining the three-dimensional spatial position coordinates of each electronic price tag image to establish the spatial layout image of the electronic price tags.
6. The method of claim 5, wherein calculating the three-dimensional spatial position coordinates of each electronic price tag image according to the robot pose information of the shelf image corresponding to each electronic price tag image, the intrinsic and extrinsic parameters of the camera, and the pixel information of the shelf image comprises:
for each electronic price tag image, calculating the lateral and longitudinal components of the actual spatial coordinates of the electronic price tag according to the robot pose information of the shelf image corresponding to the electronic price tag image, the pixel information of the shelf image, and the focal length parameter and the horizontal FOV angle parameter among the intrinsic and extrinsic parameters of the camera;
and calculating the height component of the actual spatial coordinates of the electronic price tag according to the robot pose information of the shelf image corresponding to the electronic price tag image, the pixel information of the shelf image, and the vertical pixel resolution parameter, the vertical FOV angle parameter, and the camera mounting height among the intrinsic and extrinsic parameters of the camera.
7. The method as recited in claim 1, further comprising:
raising an alarm, in the spatial layout image of the price tags, for each electronic price tag whose detection result indicates that its electronic price tag image is unclear.
8. An electronic price tag detection device, comprising:
a data acquisition module, configured to acquire shelf channel inspection data collected by a robot; wherein the robot collects the shelf channel inspection data over different shelf channels in a preset target site map; and the inspection data comprise shelf images and the robot pose information at the time each shelf image was collected;
an electronic price tag image recognition module, configured to perform electronic price tag image recognition on the shelf channel inspection data to obtain a plurality of electronic price tag images;
an electronic price tag image detection module, configured to input each electronic price tag image into an electronic price tag recognition neural network model to obtain a detection result for each electronic price tag image; wherein the electronic price tag recognition neural network model is obtained by training a deep learning neural network model with a historical electronic price tag detection data set as training data; and the historical electronic price tag detection data set comprises different historical electronic price tag images and their corresponding detection results;
a spatial layout image establishing module, configured to establish a spatial layout image of the electronic price tags according to the robot pose information of the shelf image corresponding to each electronic price tag image; wherein the spatial layout image of the electronic price tags is used for displaying the spatial coordinate information of each electronic price tag image, constructed from the robot pose information;
and a detection result display module, configured to display the detection result of each electronic price tag image in the spatial layout image of the price tags.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of claims 1 to 7.
11. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the method of any of claims 1 to 7.
CN202311330806.2A 2023-10-13 2023-10-13 Electronic price tag detection method and device Pending CN117636373A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311330806.2A CN117636373A (en) 2023-10-13 2023-10-13 Electronic price tag detection method and device

Publications (1)

Publication Number Publication Date
CN117636373A 2024-03-01

Family

ID=90036607


Similar Documents

Publication Publication Date Title
US9448758B2 (en) Projecting airplane location specific maintenance history using optical reference points
US10019803B2 (en) Store shelf imaging system and method using a vertical LIDAR
Becerik-Gerber et al. Assessment of target types and layouts in 3D laser scanning for registration accuracy
JP5337805B2 (en) Local positioning system and method
US9482754B2 (en) Detection apparatus, detection method and manipulator
US12014320B2 (en) Systems, devices, and methods for estimating stock level with depth sensor
JP2009169845A (en) Autonomous mobile robot and map update method
CN110716559B (en) Comprehensive control method for shopping mall and supermarket goods picking robot
US11209277B2 (en) Systems and methods for electronic mapping and localization within a facility
EP3657455A1 (en) Methods and systems for detecting intrusions in a monitored volume
CN112445204B (en) Object movement navigation method and device in construction site and computer equipment
JP6601613B2 (en) POSITION ESTIMATION METHOD, POSITION ESTIMATION DEVICE, AND POSITION ESTIMATION PROGRAM
JP2017004228A (en) Method, device, and program for trajectory estimation
CN114494466B (en) External parameter calibration method, device and equipment and storage medium
CN114972421A (en) Workshop material identification tracking and positioning method and system
Li et al. Evaluation of photogrammetry for use in industrial production systems
Shah et al. Condition assessment of ship structure using robot assisted 3D-reconstruction
CN112685527A (en) Method, device and electronic system for establishing map
CN112381873A (en) Data labeling method and device
CN117636373A (en) Electronic price tag detection method and device
CN112489240B (en) Commodity display inspection method, inspection robot and storage medium
US20220212811A1 (en) Fuel receptacle and boom tip position and pose estimation for aerial refueling
US20220180559A1 (en) On-Site Calibration for Mobile Automation Apparatus
KR200488998Y1 (en) Apparatus for constructing indoor map
Liu et al. Vision information and laser module based UAV target tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination