CN114022767A - Elevator floor positioning method and device, terminal equipment and storage medium - Google Patents

Elevator floor positioning method and device, terminal equipment and storage medium

Info

Publication number
CN114022767A
CN114022767A
Authority
CN
China
Prior art keywords
image
floor
elevator
display panel
key
Prior art date
Legal status
Pending
Application number
CN202111300521.5A
Other languages
Chinese (zh)
Inventor
储子翔
金超
徐玮
唐旋来
Current Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202111300521.5A
Publication of CN114022767A
Legal status: Pending

Landscapes

  • Indicating And Signalling Devices For Elevators (AREA)

Abstract

The application is applicable to the technical field of intelligent elevator floor identification, and provides an elevator floor positioning method and device, a terminal device, and a storage medium. In an embodiment of the application, image information is acquired through an image acquisition device arranged in the elevator, and a display panel image and a floor key image are extracted from the image information; image recognition is performed separately on the display panel image and the key image, and their recognition results are determined; a floor positioning result is then determined according to those recognition results. A positioning request sent by a robot is acquired, and the corresponding floor positioning result is sent to the robot according to the request, thereby improving the accuracy of the elevator floor positioning result.

Description

Elevator floor positioning method and device, terminal equipment and storage medium
Technical Field
The application belongs to the technical field of intelligent elevator floor identification, and in particular relates to an elevator floor positioning method and device, a terminal device, and a storage medium.
Background
With the development of society, robots have become increasingly common in daily life, and people increasingly rely on them to assist with work. As requirements grow, robots need the ability to ride elevators autonomously. Accurately acquiring the elevator's current floor information, so that the robot can act on it, is therefore a key factor in enabling autonomous elevator riding.
At present, an image recognition unit is mounted on the robot housing to recognize elevator data, so that the robot can enter and exit the elevator according to the recognized data. However, because the image recognition unit moves with the robot, the images captured at different times deviate excessively from one another, and the view may be blocked by the robot's own height or by passengers in the elevator. The recognized data is therefore not accurate enough, and the accuracy of the elevator floor positioning result is low.
Disclosure of Invention
The embodiments of the application provide an elevator floor positioning method and device, a terminal device, and a storage medium, which can solve the problem of low accuracy in elevator floor positioning results.
In a first aspect, an embodiment of the present application provides an elevator floor positioning method, including:
acquiring image information through an image acquisition device arranged in an elevator, and extracting a display panel image and a floor key image from the image information;
performing image recognition separately on the display panel image and the key image, and determining recognition results of the display panel image and the key image;
determining a floor positioning result according to the display panel image and the identification result of the key image;
and acquiring a positioning request sent by the robot, and sending a corresponding floor positioning result to the robot according to the positioning request.
In a second aspect, an embodiment of the present application provides an elevator floor positioning device, including:
the extraction module is used for acquiring image information through an image acquisition device arranged in the elevator and extracting a display panel image and a floor key image from the image information;
the identification module is used for performing image recognition separately on the display panel image and the key image and determining the recognition results of the display panel image and the key image;
the result determining module is used for determining a floor positioning result according to the display panel image and the identification result of the key image;
and the acquisition module is used for acquiring the positioning request sent by the robot and sending the corresponding floor positioning result to the robot according to the positioning request.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the elevator floor positioning methods when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps of any one of the elevator floor positioning methods.
In a fifth aspect, the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute any one of the elevator floor positioning methods of the first aspect.
In an embodiment of the application, image information is acquired through an image acquisition device arranged in the elevator, which avoids the problem of excessive deviation between images captured at different times, and the display panel image and the floor key image are extracted from the image information to eliminate interfering content and improve recognition accuracy. Image recognition is performed separately on the display panel image and the key image, giving a double check on the current floor. After the recognition results of the two images are determined, the floor positioning result is determined from them, further improving the accuracy of the elevator floor positioning result. After the positioning request sent by the robot is acquired, the corresponding floor positioning result is sent to the robot according to the request, so that the robot performs its operation according to a more accurate elevator floor positioning result.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a first schematic flow chart of an elevator floor positioning method provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a positioning apparatus provided in an embodiment of the present application;
fig. 3 is a second flowchart of an elevator floor positioning method provided by an embodiment of the present application;
fig. 4 is a first structural schematic diagram of an elevator floor display panel provided by the embodiment of the application;
fig. 5 is a second structural schematic diagram of an elevator floor display panel provided by the embodiment of the application;
fig. 6 is a schematic structural diagram of an elevator floor positioning device provided by an embodiment of the application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Fig. 1 is a schematic flow chart of an elevator floor positioning method in an embodiment of the present application. The execution subject of the method may be a terminal device, such as a positioning apparatus; this embodiment is described taking the positioning apparatus as an example. As shown in fig. 1, the elevator floor positioning method may include the following steps:
and S101, acquiring image information through an image acquisition device arranged in the elevator, and extracting a display panel image and a floor key image from the image information.
In the present embodiment, as shown in fig. 2, the positioning apparatus 10 is disposed in the elevator car 12. Image information is acquired by an image acquisition device 101 installed on the elevator ceiling 13; the image acquisition device 101 may be a digital camera module that captures the image information within its field of view. The acquired image information is processed by the image processing device 102 in the positioning apparatus 10, and the display panel 14 image and the floor key 15 image are extracted from it, so that the positioning apparatus 10 can perform floor positioning according to the extracted images. The image information includes a combination of at least two of the elevator floor display panel 14 image, the floor key 15 image, and the elevator car door 16 image. The positioning apparatus 10 includes the image acquisition device 101, the image processing device 102, an image recognition device 103, a storage device 104, and a wireless communication device 105. Arranging the positioning apparatus 10 inside the elevator car 12 avoids any construction work outside the elevator, keeping cost low; after simple on-site deployment the apparatus is suitable for elevators of all models on the market, with no repeated development for different elevator protocols. The positioning apparatus can also be deployed in separate parts: the image acquisition device is installed in the elevator car 12, while the processing part (the image processing device 102, image recognition device 103, storage device 104, and wireless communication device 105) is installed on the robot body or at another suitable position, as required by the user.
Illustratively, extracting the display panel 14 image and the floor key 15 image from the image information may include: extracting feature patterns of a preset specification from the image information by Hough transform, using the image processing device 102 in the positioning apparatus 10, and then cropping the image information according to the extracted feature patterns to determine the display panel 14 image and the key image. The display panel shows the floor number of the elevator; the key image shows the on/off state of the backlight of the key corresponding to each selected floor. The feature pattern of the preset specification includes at least one of feature lines, feature shapes, and feature points. For example, feature points may be set on the display panel 14 inside the elevator, and the image processing device 102 can obtain the desired image area, which may be an AOI (area of interest) image, from these feature points by Hough transform.
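The Hough-based region extraction described above can be sketched as follows. This is a minimal numpy illustration (names and thresholds are hypothetical, not from the patent) that restricts the Hough accumulator to horizontal and vertical lines, which is enough to find the border of an axis-aligned display panel in a binary edge image; a real deployment would typically use a full Hough transform such as OpenCV's `cv2.HoughLines`.

```python
import numpy as np

def find_panel_region(edges, min_votes):
    """Locate an axis-aligned rectangular panel border in a binary edge
    image using a Hough transform restricted to theta = 0 / 90 degrees:
    each row and column acts as one accumulator cell."""
    row_votes = edges.sum(axis=1)          # votes for horizontal lines
    col_votes = edges.sum(axis=0)          # votes for vertical lines
    rows = np.flatnonzero(row_votes >= min_votes)
    cols = np.flatnonzero(col_votes >= min_votes)
    if len(rows) < 2 or len(cols) < 2:
        return None                        # no panel border found
    # The outermost strong lines bound the panel region.
    return rows[0], rows[-1], cols[0], cols[-1]

# Synthetic edge image with a rectangular border from (10, 20) to (40, 60).
edges = np.zeros((80, 100), dtype=np.uint8)
edges[10, 20:61] = 1; edges[40, 20:61] = 1
edges[10:41, 20] = 1; edges[10:41, 60] = 1
top, bottom, left, right = find_panel_region(edges, min_votes=20)
panel = edges[top:bottom + 1, left:right + 1]   # cropped panel region
```

The same call with a different `min_votes` threshold would be applied to the key-panel area.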
It should be understood that, before the feature patterns of the preset specification are extracted from the image information, the feature patterns required by the image processing device 102 must be stored in the storage device 104 of the positioning apparatus 10. To improve extraction accuracy, the feature pattern of the preset specification can be determined from an image of the current elevator scene. For example, after the positioning apparatus 10 is installed in the elevator, a current image is acquired by the image acquisition device 101, the feature lines of the elevator floor display panel 14 are identified by the image processing device 102 to obtain their dimensions, and those dimensions are input into the storage unit, so that the display panel 14 image can later be extracted according to the feature lines.
Illustratively, extracting the display panel 14 image and the floor key 15 image from the image information may also include: the image processing device 102 in the positioning apparatus 10 extracts the display panel 14 image and the floor key 15 image from the image information according to preset absolute coordinate values. Since the image acquisition unit is fixed to the elevator ceiling 13 and its working environment rarely changes, the captured images are highly consistent, so the desired images can be extracted according to absolute coordinate values of the desired image areas that are set in advance and stored in the storage unit.
It will be appreciated that the absolute coordinate values required by the image processing device 102 are stored in the storage device 104 of the positioning apparatus 10 before the display panel 14 image and the floor key 15 image are extracted according to them. Specifically, after the positioning apparatus 10 is installed, the absolute coordinate position of the display screen within the image is input into the storage unit.
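Because the camera is fixed to the car ceiling, cropping by stored absolute coordinates reduces to simple array slicing. The region names and coordinate values below are hypothetical placeholders for what an installer would enter:

```python
import numpy as np

# Hypothetical stored regions as (y0, y1, x0, x1); entered after installation.
REGIONS = {"display_panel": (5, 25, 10, 50), "floor_keys": (30, 70, 10, 30)}

def extract_regions(frame, regions):
    """Crop each configured region from a camera frame by absolute
    coordinates; valid because the camera position never changes."""
    return {name: frame[y0:y1, x0:x1]
            for name, (y0, y1, x0, x1) in regions.items()}

frame = (np.arange(80 * 100) % 256).astype(np.uint8).reshape(80, 100)  # stand-in frame
crops = extract_regions(frame, REGIONS)
```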
In one embodiment, in order to improve the accuracy of the image recognition performed by the image recognition device 103 and to increase the processing speed, the display panel 14 image and the floor key 15 image may be further processed after being extracted from the image information. Specifically, the image processing device 102 may crop away everything other than the display panel 14 image and the floor key 15 image; it may apply a quadrilateral (perspective) transformation to obtain front views of the display panel 14 image and the floor key 15 image; it may convert the two images into grayscale images; it may reduce their size to speed up subsequent processing; and it may apply morphological erosion and dilation to the two images to remove interference and improve the accuracy of later recognition.
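A few of the preprocessing steps listed above (grayscale conversion, downscaling, and 3x3 erosion/dilation) can be sketched with plain numpy. In practice a library such as OpenCV would supply these; the luminance weights and kernel size here are common defaults, not values from the patent:

```python
import numpy as np

def to_gray(img_rgb):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return (img_rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def downscale2(img):
    """Halve each dimension by 2x2 block averaging to speed later steps."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

def erode(binary):
    """3x3 erosion: a pixel survives only if its whole neighborhood is set."""
    p = np.pad(binary, 1)
    out = np.ones_like(binary)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

def dilate(binary):
    """3x3 dilation: a pixel is set if any pixel in its neighborhood is set."""
    p = np.pad(binary, 1)
    out = np.zeros_like(binary)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out
```

An erosion followed by a dilation (morphological opening) removes isolated noise pixels while roughly preserving the shapes of the lit digits and key backlights.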
In one embodiment, if the display panel 14 image and/or the floor key 15 image in the currently acquired image information is blocked, so that the corresponding floor cannot be accurately determined, the image information can be acquired again. The blocking may be caused by too many passengers in the elevator, or by stains on the display panel 14 or the floor keys 15; an alarm device can therefore be integrated, which raises an alarm to remind passengers to move aside so that the image information is not blocked, or to remind the relevant staff to perform an inspection.
Step S102: perform image recognition separately on the display panel 14 image and the key image, and determine recognition results of the display panel 14 image and the key image.
In this embodiment, the image recognition device 103 in the positioning apparatus 10 performs image recognition separately on the display panel 14 image and the key image in a preset manner, according to the floor images corresponding to each floor stored in the storage device 104, and determines their recognition results. The recognition result of the display panel 14 image is the floor currently displayed by the display panel 14; the recognition result of the key image is the change in the elevator key backlight display. The preset manner includes pixel-level image comparison, machine learning, deep learning, and the like. The floor images include the images displayed on the floor display panel 14, images of the opening and closing states of the elevator car door 16, and images of the on and off states of the floor key 15 backlights.
It can be understood that, before image recognition is performed on the display panel 14 image and the key image, each floor image required by the image recognition device 103, together with the label value set by the operator for each floor, is stored in the storage device 104 of the positioning apparatus 10, and the format of each pre-stored floor image is identical to the format of the image to be recognized. Specifically, to improve accuracy, the floor images can be determined using the current elevator. For example, after the positioning apparatus 10 is installed, the image acquisition device 101 sequentially captures the images displayed on the display panel 14 for each floor, the image processing device 102 processes them, and floor labels are set in turn for the processed display panel 14 images; if the current display panel 14 image shows floor 1, its label is set to floor 1. Likewise, the worker stores images of each floor key 15 with its backlight on and off, and sets the floor labels in turn.
In one embodiment, as shown in fig. 3, the step S102 includes:
step S301, calculating a gray scale difference of each pixel point between the display panel 14 image and each preset floor panel image.
Step S302: take, as the recognition result of the display panel 14 image, the floor number corresponding to the floor panel image for which a preset number of pixel points have grayscale differences within a preset range.
In this embodiment, the image recognition device 103 compares the display panel 14 image with each preset floor panel image. If the grayscale deviation between corresponding pixel points of the two images is within a threshold range, those pixel points are considered consistent. The number of pixel points whose grayscale deviation falls within the threshold range is then accumulated; if this number is greater than or equal to the minimum number of identical pixel points required for the images to be considered consistent (the preset number), the two images are considered consistent, that is, the display panel 14 image and the compared preset floor panel image show the same floor. When the two images are determined to match, the floor number corresponding to the compared preset floor panel image is taken as the recognition result of the current display panel 14 image.
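Steps S301-S302 amount to a tolerance-based pixel count against each stored floor template. A minimal numpy sketch, with `tol` and `min_matching` standing in for the patent's unspecified threshold range and preset number:

```python
import numpy as np

def match_floor(panel_gray, templates, tol, min_matching):
    """Pixel-level comparison: for each stored floor template, count the
    pixels whose grayscale difference is within `tol`; the first template
    with at least `min_matching` agreeing pixels gives the floor."""
    for floor, tmpl in templates.items():
        diff = np.abs(panel_gray.astype(np.int16) - tmpl.astype(np.int16))
        if np.count_nonzero(diff <= tol) >= min_matching:
            return floor
    return None  # occluded or unknown display -> reacquire the image

rng = np.random.default_rng(0)
templates = {f: rng.integers(0, 256, (20, 40), dtype=np.uint8) for f in (1, 2, 3)}
# Simulate a live capture of floor 2's panel with small sensor noise.
noisy = np.clip(templates[2].astype(np.int16) + rng.integers(-5, 6, (20, 40)),
                0, 255).astype(np.uint8)
floor = match_floor(noisy, templates, tol=8, min_matching=int(0.9 * 20 * 40))
```

The same function applies to the key-region image, with the floor label information of the matching key template returned instead.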
Step S303: calculate the grayscale difference of each pixel point between the key image and each preset floor key 15 image.
Step S304: take, as the recognition result of the key image, the floor label information corresponding to the floor key 15 image for which a preset number of pixel points have grayscale differences within the preset range.
In this embodiment, the image recognition device 103 compares the key image with each preset floor key 15 image. If the grayscale deviation between corresponding pixel points of the two images is within the threshold range, those pixel points are considered consistent; the number of such pixel points is accumulated, and if it is greater than or equal to the minimum number required for the images to be considered consistent, the two images are considered consistent, that is, the key image and the compared preset floor key 15 image correspond to the same floor. When the two images are determined to match, the floor label information corresponding to the compared preset floor key 15 image is taken as the recognition result of the current key image.
In one embodiment, step S102 includes: the image recognition device 103 inputs the display panel 14 image and the key image into a preset network model for image recognition, so as to determine their recognition results. The network model can be obtained by training on the images of each floor of the elevator, which guarantees the reliability of the output of the image recognition device 103.
Illustratively, image recognition based on the network model may include: the image recognition device 103 reads parameters, pre-stored in the storage unit, that were trained on digit information, and performs digit recognition on the display panel 14 image using the network model. Correspondingly, the machine learning parameters called by the image recognition device 103, together with parameters obtained from reinforcement training, are used to train the network model; images of the corresponding scenes pre-stored in the storage unit can also be called for targeted reinforcement training.
Illustratively, image recognition based on the network model may further include: the image recognition device 103 retrieves the floor images of each floor pre-stored in the storage device and recognizes the display panel 14 image and the key image using logistic regression or a convolutional neural network. Correspondingly, the image recognition device 103 calls each floor image and the corresponding floor label pre-stored in the storage device 104 to train the network model by logistic regression or a convolutional neural network.
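As one possible reading of the logistic-regression variant, a minimal multinomial (softmax) classifier over flattened grayscale panel images can be trained by plain gradient descent. The synthetic data, dimensions, and hyperparameters below are purely illustrative; the patent does not specify the model or training setup:

```python
import numpy as np

def train_softmax(X, y, n_classes, lr=0.5, epochs=300):
    """Multinomial logistic regression trained by batch gradient descent;
    each row of X is a flattened panel image, y is its floor label."""
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numeric stability
        p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / len(X)                  # mean softmax gradient
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(X, W, b):
    return np.argmax(X @ W + b, axis=1)

# Synthetic "floor images": each floor lights a different pixel block.
rng = np.random.default_rng(1)
X, y = [], []
for floor in range(3):
    for _ in range(20):
        img = rng.normal(0.0, 0.1, 64)
        img[floor * 16:(floor + 1) * 16] += 1.0       # distinguishing block
        X.append(img); y.append(floor)
X, y = np.array(X), np.array(y)
W, b = train_softmax(X, y, n_classes=3)
acc = (predict(X, W, b) == y).mean()
```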
In one embodiment, because of differences in ambient light, image recognition using pixel-level comparison may not yield accurate results, and image recognition using machine learning may also produce errors. Therefore, a function for tracking and identifying the elevator floor key 15 backlights in real time, driven by the elevator users' key presses, can be added to assist the floor identification and improve its accuracy.
Step S103: determine a floor positioning result according to the recognition results of the display panel 14 image and the key image.
In this embodiment, the current elevator floor can be determined from the recognition result of the display panel 14 image, and the recognition result of the key image is then used to determine whether the backlight of the key corresponding to that floor is in the off state; if so, the current floor positioning result is the elevator floor determined from the current display panel 14 image. It can be understood that, as shown in fig. 4 and fig. 5, the display panel 14 may be of the segment-code dot-matrix type shown in fig. 4 or the LCD screen type shown in fig. 5, and the digit fonts displayed by the two differ. If recognition were based on the display panel 14 image alone, the reliability of the result would be low; using the recognition result of the key image as well allows the floor positioning result to be determined more accurately.
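The dual check of step S103 can be expressed as a small decision function. The dictionary-based backlight representation is an assumption for illustration; a floor absent from the map is conservatively treated as unconfirmed:

```python
def locate_floor(panel_floor, key_backlight_on):
    """Cross-check of step S103: the display panel reading is accepted as
    the current floor only if the backlight of the key for that floor has
    gone off, indicating the car has arrived at the selected floor."""
    if panel_floor is None:
        return None  # panel unreadable (e.g. occluded): no result
    # key_backlight_on maps floor number -> whether that key's light is on;
    # a floor missing from the map is treated as unconfirmed (assumption).
    if key_backlight_on.get(panel_floor) is False:
        return panel_floor
    return None

current = locate_floor(3, {3: False, 5: True})  # key 3 light off -> arrived
```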
In one embodiment, after step S103, the method includes: the image processing device 102 in the positioning apparatus 10 performs feature extraction on the image information and determines an elevator scene image within it; the image recognition device 103 in the positioning apparatus 10 then performs image recognition on the elevator scene image to determine its recognition result. If the recognition result indicates that a robot is present in the elevator, the positioning apparatus needs to send the currently determined floor positioning result to that robot.
Step S104: acquire a positioning request sent by the robot, and send the corresponding floor positioning result to the robot according to the positioning request.
In the present embodiment, the wireless communication device 105 in the positioning apparatus 10 receives the positioning request transmitted by the robot 11 via the robot's own wireless communication device, and the positioning apparatus 10 sends the floor positioning result to the robot 11 through the wireless communication device 105 according to the request, so that the robot 11 performs the corresponding operation according to the floor positioning result.
In one embodiment, after step S104, the method includes: the image processing device 102 in the positioning apparatus 10 performs feature extraction on the image information and determines the elevator car door 16 image within it; the image recognition device 103 in the positioning apparatus 10 then performs image recognition on the elevator car door 16 image to determine its recognition result, from which the positioning apparatus further determines the car door state. The positioning apparatus 10 sends the determined car door state to the robot according to the request sent by the robot 11, so that the robot 11 performs the corresponding operation according to both the floor positioning result and the car door 16 state. When the elevator car door 16 is detected to be fully open and the corresponding light state also meets the preset light state, the recognition result of the car door 16 image is judged to be the door-open state.
For example, when the robot 11 is outside or inside the elevator, it sends a request to the positioning apparatus 10; according to that request, the positioning apparatus 10 sends the recognized floor positioning result and the recognition result of the car door 16 image to the robot 11 through the wireless communication device 105, so that the robot 11 can judge whether to enter or exit the elevator based on them.
In one embodiment, after the positioning request sent by the robot is acquired in step S104, the method includes: when the positioning apparatus 10 detects that the floor positioning result matches the target floor in the positioning request and the elevator car door 16 is in the door-open state, the positioning apparatus 10 sends a reminder signal to the robot 11 through the wireless communication device 105, so that the robot 11 performs the corresponding operation according to the reminder signal.
Illustratively, when the robot 11 is outside or inside the elevator, it sends a request to the positioning apparatus 10. The positioning apparatus 10 makes a judgment according to the request; when the recognized floor positioning result matches the target floor in the request and the elevator car door 16 is in the door-open state, the positioning apparatus 10 sends a reminder signal to the robot 11 through its wireless communication device 105, so that the robot 11 enters the elevator according to the signal. Here the floor positioning result means that the recognition result of the display panel 14 image is the target floor and the recognition result of the key image is that the key corresponding to the target floor has its light off. By simultaneously monitoring and recognizing the current floor information on the elevator floor display panel 14, the backlight of the robot 11's target-floor key on the elevator keypad, and the state of the car door 16, the positioning apparatus 10 obtains the current floor state of the elevator more safely and reliably, and transmits it through the wireless communication unit to the robot 11 waiting in the waiting area or inside the elevator car 12.
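The reminder-signal condition can likewise be sketched as a predicate. The argument names are hypothetical, and `floor_result` is assumed to already encode the dual panel/key check (None when the key backlight contradicts the panel reading):

```python
def should_board(floor_result, target_floor, door_open):
    """Condition for sending the reminder signal: the recognized floor
    positioning result matches the robot's target floor and the car
    door 16 is fully open."""
    return (floor_result is not None
            and floor_result == target_floor
            and door_open)

signal = should_board(5, 5, True)  # floor matches target, door open
```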
In the embodiment of the application, image information is acquired through an image acquisition device fixed inside the elevator, which avoids excessive deviation between images captured at different times; the display panel image and the floor key image are then extracted from the image information to remove interfering content and improve recognition accuracy. Image recognition is performed separately on the display panel image and the key image, providing a dual check of the current floor. After the two recognition results are determined, the floor positioning result is derived from both, further improving the accuracy of elevator floor positioning. Finally, after the positioning request sent by the robot is obtained, the corresponding floor positioning result is sent to the robot according to the request, so that the robot can perform the corresponding operation based on a more accurate elevator floor positioning result.
It should be understood that the step numbers in the foregoing embodiments do not imply an execution order; the execution order of each process is determined by its function and internal logic, and places no limitation on the implementation of the embodiments of the present application.
Corresponding to the above-mentioned elevator floor positioning method, fig. 6 is a schematic structural diagram of an elevator floor positioning device in an embodiment of the present application, and as shown in fig. 6, the elevator floor positioning device may include:
the extraction module 601 is configured to obtain image information through an image acquisition device built in the elevator, and extract a display panel image and a floor button image from the image information.
The identifying module 602 is configured to perform image identification on the display panel image and the key image, respectively, and determine an identification result of the display panel image and the key image.
A result determination module 603, configured to determine a floor positioning result according to the recognition results of the display panel image and the key image;
the obtaining module 604 is configured to obtain a positioning request sent by the robot, and send a corresponding floor positioning result to the robot according to the positioning request.
In one embodiment, the extracting module 601 may include:
the first extraction unit is used for extracting a feature pattern of a preset specification from the image information.
The cropping unit is used for cropping the image information according to the feature pattern to determine the display panel image and the key image; the display panel image shows the floor number where the elevator is located, and the key image shows the on/off state of the backlight of the key corresponding to the selected floor.
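The extract-then-crop step above can be sketched in plain Python: treating the frame as a 2D list of grayscale values, a known marker pattern is located by exact scan, and regions are then cut out relative to it. The marker values, frame size, and offsets below are illustrative assumptions, not values from the patent.

```python
def find_marker(img, marker):
    """Scan a grayscale frame (list of rows) for an exact occurrence of
    a small marker pattern; return its top-left (row, col), or None."""
    frame_h, frame_w = len(img), len(img[0])
    h, w = len(marker), len(marker[0])
    for r in range(frame_h - h + 1):
        for c in range(frame_w - w + 1):
            if all(img[r + i][c + j] == marker[i][j]
                   for i in range(h) for j in range(w)):
                return r, c
    return None

def crop(img, top, left, height, width):
    """Cut a rectangular region out of the frame."""
    return [row[left:left + width] for row in img[top:top + height]]

# A 6x6 frame with a 2x2 marker (value 9) placed at row 2, column 3.
frame = [[0] * 6 for _ in range(6)]
for i in (2, 3):
    for j in (3, 4):
        frame[i][j] = 9

pos = find_marker(frame, [[9, 9], [9, 9]])
print(pos)  # (2, 3)
# The panel and key regions would then be cropped at fixed offsets
# from `pos`, e.g. crop(frame, pos[0], pos[1], 2, 2).
```

A production system would use tolerant template matching rather than exact equality, but the locate-then-crop structure is the same.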
In one embodiment, the extracting module 601 may include:
and the second extraction unit is used for extracting the display panel image and the floor key image from the image information according to preset absolute coordinates.
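Because the camera is rigidly mounted in the car, this second variant reduces extraction to slicing the frame at calibrated positions. A minimal sketch follows; the region coordinates are purely illustrative assumptions.

```python
# Preset absolute regions as (top, left, height, width), calibrated
# once for the fixed camera mount; these values are illustrative.
PANEL_REGION = (10, 20, 16, 24)
KEYPAD_REGION = (40, 20, 48, 24)

def crop_region(img, region):
    """Slice a rectangular region out of a 2D grayscale frame."""
    top, left, height, width = region
    return [row[left:left + width] for row in img[top:top + height]]

# Example on a synthetic 100x60 frame whose pixel at (r, c) is r*100+c.
frame = [[r * 100 + c for c in range(60)] for r in range(100)]
panel = crop_region(frame, PANEL_REGION)
print(len(panel), len(panel[0]))  # 16 24
print(panel[0][0])                # 1020
```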
In one embodiment, the identifying module 602 may include:
and the first calculation unit is used for calculating the grayscale difference of each pixel between the display panel image and each preset floor panel image.
The first recognition unit is used for taking, as the recognition result of the display panel image, the floor number corresponding to the floor panel image for which at least a preset number of pixels have grayscale differences within a preset range.
The second calculation unit is used for calculating the grayscale difference of each pixel between the key image and each preset floor key image.
The second recognition unit is used for taking, as the recognition result of the key image, the floor marking information corresponding to the floor key image for which at least a preset number of pixels have grayscale differences within a preset range.
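The grayscale-difference matching used by both recognition units can be sketched as one generic routine: count the pixels whose difference from a stored reference image falls within the preset range, and accept the reference whose count reaches the preset number. Function and parameter names here are illustrative, and the thresholds would be tuned in practice.

```python
def match_reference(image, references, max_diff=10, min_count=3):
    """Compare `image` against each stored reference (label -> 2D image)
    pixel by pixel; return the label with the most pixels whose absolute
    grayscale difference is within `max_diff`, provided that count
    reaches `min_count`; otherwise return None."""
    best_label, best_count = None, 0
    for label, ref in references.items():
        count = sum(
            1
            for img_row, ref_row in zip(image, ref)
            for p, q in zip(img_row, ref_row)
            if abs(p - q) <= max_diff
        )
        if count >= min_count and count > best_count:
            best_label, best_count = label, count
    return best_label

# Stored 2x2 references for two floors, and a slightly noisy observation.
refs = {"1": [[10, 10], [10, 10]], "2": [[200, 200], [200, 200]]}
print(match_reference([[12, 9], [11, 10]], refs, max_diff=5))  # 1
```

The same routine serves both units: for the display panel the labels are floor numbers, for the keypad they are floor marking information.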
In one embodiment, the identifying module 602 may include:
the training unit is used for inputting the display panel image and the key image respectively into a preset network model for image recognition and determining the recognition results of the display panel image and the key image; the network model is obtained by training on images of each floor in the elevator.
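The patent does not specify the network architecture, so as an illustration only: recognition by a trained model reduces, at its simplest, to scoring the flattened image against per-floor weight vectors and taking the argmax. The weights and labels below are toy assumptions; real weights would come from training on the per-floor images mentioned above.

```python
def predict_floor(image, weights):
    """Score each floor label with a dot product over flattened pixels
    and return the best-scoring label. `weights` maps each label to a
    flat weight vector learned from per-floor training images."""
    flat = [p for row in image for p in row]
    scores = {
        label: sum(w * p for w, p in zip(vec, flat))
        for label, vec in weights.items()
    }
    return max(scores, key=scores.get)

# Toy weights that respond to diagonal vs. anti-diagonal lit pixels.
toy_weights = {"F1": [1, 0, 0, 1], "F2": [0, 1, 1, 0]}
print(predict_floor([[1, 0], [0, 1]], toy_weights))  # F1
```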
In one embodiment, the elevator floor positioning device may further include:
and the first feature extraction module is used for performing feature extraction on the image information and determining an elevator scene image in the image information.
The result sending module is used for sending the floor positioning result to the robot when it is determined from the elevator scene image that a robot is present in the elevator.
In one embodiment, the elevator floor positioning device may further include:
and the second feature extraction module is used for performing feature extraction on the image information and determining an elevator car door image in the image information.
And the first image recognition module is used for carrying out image recognition on the elevator car door image and determining the recognition result of the elevator car door image.
And the state determining module is used for determining the elevator car door state according to the identification result of the elevator car door image.
And the first execution module is used for sending the elevator car door state to the robot according to the positioning request so that the robot executes corresponding operation according to the floor positioning result and the elevator car door state.
In one embodiment, the elevator floor positioning device may further include:
and the second execution module is used for sending a reminder signal when it is detected that the floor positioning result matches the target floor in the positioning request and the elevator car door is in the open state, so that the robot performs the corresponding operation according to the reminder signal.
In the embodiment of the application, image information is acquired through an image acquisition device fixed inside the elevator, which avoids excessive deviation between images captured at different times; the display panel image and the floor key image are then extracted from the image information to remove interfering content and improve recognition accuracy. Image recognition is performed separately on the display panel image and the key image, providing a dual check of the current floor. After the two recognition results are determined, the floor positioning result is derived from both, further improving the accuracy of elevator floor positioning. Finally, after the positioning request sent by the robot is obtained, the corresponding floor positioning result is sent to the robot according to the request, so that the robot can perform the corresponding operation based on a more accurate elevator floor positioning result.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and modules described above may refer to the corresponding processes in the foregoing system and method embodiments, and are not repeated here.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. For convenience of explanation, only portions related to the embodiments of the present application are shown.
As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 700 (only one is shown in fig. 7), a memory 701 connected to the processor 700, and a computer program 702, such as an elevator floor positioning program, stored in the memory 701 and executable on the at least one processor 700. When executing the computer program 702, the processor 700 implements the steps of the elevator floor positioning method embodiments described above, such as steps S101 to S104 shown in fig. 1. Alternatively, when executing the computer program 702, the processor 700 implements the functions of the modules in the device embodiments, such as the functions of modules 601 to 604 shown in fig. 6.
Illustratively, the computer program 702 may be divided into one or more modules, which are stored in the memory 701 and executed by the processor 700 to carry out the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 702 in the terminal device 7. For example, the computer program 702 may be divided into an extraction module 601, a recognition module 602, a result determination module 603, and an acquisition module 604, whose specific functions are as follows:
the extraction module 601 is used for acquiring image information through an image acquisition device arranged in the elevator and extracting a display panel image and a floor key image from the image information;
the identification module 602 is configured to perform image identification on the display panel image and the key image, and determine an identification result of the display panel image and the key image;
a result determination module 603, configured to determine a floor positioning result according to the recognition results of the display panel image and the key image;
the obtaining module 604 is configured to obtain a positioning request sent by the robot, and send a corresponding floor positioning result to the robot according to the positioning request.
The terminal device 7 may include, but is not limited to, the processor 700 and the memory 701. It will be understood by those skilled in the art that fig. 7 is only an example of the terminal device 7 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components, such as an input/output device, a network access device, or a bus.
The processor 700 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 701 may in some embodiments be an internal storage unit of the terminal device 7, such as its hard disk or internal memory. In other embodiments, the memory 701 may be an external storage device of the terminal device 7, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device 7. Further, the memory 701 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 701 is used for storing an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division into functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, may each exist separately and physically, or may be integrated two or more into one unit; the integrated unit may be implemented in hardware or as a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from one another and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative. The division into the above modules or units is only one kind of logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to an apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (11)

1. An elevator floor location method, comprising:
acquiring image information through an image acquisition device arranged in an elevator, and extracting a display panel image and a floor key image from the image information;
respectively carrying out image recognition on the display panel image and the key image, and determining recognition results of the display panel image and the key image;
determining a floor positioning result according to the recognition results of the display panel image and the key image;
and acquiring a positioning request sent by the robot, and sending a corresponding floor positioning result to the robot according to the positioning request.
2. The elevator floor location method of claim 1 wherein said extracting a display panel image and a floor key image from said image information comprises:
extracting a feature pattern of a preset specification from the image information;
cropping the image information according to the feature pattern to determine the display panel image and the key image; wherein the display panel image shows the floor number where the elevator is located, and the key image shows the on/off state of the backlight of the key corresponding to the selected floor.
3. The elevator floor location method of claim 1 wherein said extracting a display panel image and a floor key image from said image information comprises:
extracting the display panel image and the floor key image from the image information according to preset absolute coordinates.
4. The elevator floor positioning method of claim 1, wherein said performing image recognition on said display panel image and said key image, respectively, and determining the recognition results of said display panel image and said key image, comprises:
calculating the grayscale difference of each pixel between the display panel image and each preset floor panel image;
taking, as the recognition result of the display panel image, the floor number corresponding to the floor panel image for which at least a preset number of pixels have grayscale differences within a preset range;
calculating the grayscale difference of each pixel between the key image and each preset floor key image;
and taking, as the recognition result of the key image, the floor marking information corresponding to the floor key image for which at least a preset number of pixels have grayscale differences within a preset range.
5. The elevator floor positioning method of claim 1, wherein said performing image recognition on said display panel image and said key image, respectively, and determining the recognition results of said display panel image and said key image, comprises:
inputting the display panel image and the key image respectively into a preset network model for image recognition, and determining the recognition results of the display panel image and the key image; wherein the network model is obtained by training on images of each floor in the elevator.
6. The elevator floor location method according to any one of claims 1 to 5, after determining a floor location result from the recognition results of the display panel image and the key image, comprising:
performing feature extraction on the image information, and determining an elevator scene image in the image information;
and sending the floor positioning result to the robot when it is determined from the elevator scene image that a robot is present in the elevator.
7. The elevator floor location method of any of claims 1 to 5, after sending a corresponding floor location result to the robot according to the location request, comprising:
extracting the characteristics of the image information, and determining an elevator car door image in the image information;
carrying out image recognition on the elevator car door image, and determining a recognition result of the elevator car door image;
determining the elevator car door state according to the recognition result of the elevator car door image;
and sending the elevator car door state to the robot according to the positioning request, so that the robot executes corresponding operation according to the floor positioning result and the elevator car door state.
8. The elevator floor location method of claim 7, after obtaining the location request sent by the robot, comprising:
sending a reminder signal when it is detected that the floor positioning result matches the target floor in the positioning request and the elevator car door is in the open state, so that the robot performs the corresponding operation according to the reminder signal.
9. An elevator floor positioning device, comprising:
the extraction module is used for acquiring image information through an image acquisition device arranged in the elevator and extracting a display panel image and a floor key image from the image information;
the identification module is used for respectively carrying out image identification on the display panel image and the key image and determining the identification results of the display panel image and the key image;
the result determining module is used for determining a floor positioning result according to the recognition results of the display panel image and the key image;
and the acquisition module is used for acquiring the positioning request sent by the robot and sending the corresponding floor positioning result to the robot according to the positioning request.
10. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the elevator floor positioning method according to any one of claims 1 to 8.
11. A computer-readable storage medium in which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the elevator floor positioning method according to any one of claims 1 to 8.
CN202111300521.5A 2021-11-04 2021-11-04 Elevator floor positioning method and device, terminal equipment and storage medium Pending CN114022767A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111300521.5A CN114022767A (en) 2021-11-04 2021-11-04 Elevator floor positioning method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111300521.5A CN114022767A (en) 2021-11-04 2021-11-04 Elevator floor positioning method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114022767A true CN114022767A (en) 2022-02-08

Family

ID=80061276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111300521.5A Pending CN114022767A (en) 2021-11-04 2021-11-04 Elevator floor positioning method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114022767A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107021390A (en) * 2017-04-18 2017-08-08 上海木爷机器人技术有限公司 The control method and system of elevator are taken by robot
CN109895105A (en) * 2017-12-11 2019-06-18 拉扎斯网络科技(上海)有限公司 A kind of intelligent apparatus
CN209480995U (en) * 2018-11-30 2019-10-11 深圳市普渡科技有限公司 The system of robot autonomous disengaging elevator
CN110610191A (en) * 2019-08-05 2019-12-24 深圳优地科技有限公司 Elevator floor identification method and device and terminal equipment
CN110697526A (en) * 2019-09-04 2020-01-17 深圳优地科技有限公司 Elevator floor detection method and device
CN111039113A (en) * 2019-12-31 2020-04-21 北京猎户星空科技有限公司 Elevator running state determining method, device, equipment and medium
CN111999721A (en) * 2020-08-21 2020-11-27 深圳优地科技有限公司 Floor recognition method, device, system and computer readable storage medium
CN112299173A (en) * 2020-10-31 2021-02-02 成都新潮传媒集团有限公司 Elevator floor display method and device and computer readable storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419471A (en) * 2022-03-29 2022-04-29 北京云迹科技股份有限公司 Floor identification method and device, electronic equipment and storage medium
CN114735559A (en) * 2022-04-19 2022-07-12 中国人民解放军63811部队 Elevator monitoring system and method based on image recognition
CN114735559B (en) * 2022-04-19 2023-10-31 中国人民解放军63811部队 Elevator monitoring system and method based on image recognition
CN114735561A (en) * 2022-04-26 2022-07-12 北京三快在线科技有限公司 Method and device for detecting elevator stop floor

Similar Documents

Publication Publication Date Title
CN114022767A (en) Elevator floor positioning method and device, terminal equipment and storage medium
CN110855976B (en) Camera abnormity detection method and device and terminal equipment
CN106845890B (en) Storage monitoring method and device based on video monitoring
KR101824446B1 (en) A reinforcement learning based vehicle number recognition method for CCTV
KR101935010B1 (en) Apparatus and method for recognizing license plate of car based on image
CN112070053B (en) Background image self-updating method, device, equipment and storage medium
JP5063567B2 (en) Moving object tracking device
CN114187561A (en) Abnormal behavior identification method and device, terminal equipment and storage medium
CN111460917B (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
CN111291749A (en) Gesture recognition method and device and robot
CN107610260B (en) Intelligent attendance system and attendance method based on machine vision
KR101236266B1 (en) Manage system of parking space information
CN109257594A (en) TV delivery detection method, device and computer readable storage medium
CN112292847A (en) Image processing apparatus, mobile apparatus, method, and program
CN112232130B (en) Offline detection method, industrial personal computer, ETC antenna device and system
US20240046647A1 (en) Method and device for detecting obstacles, and computer storage medium
CN115471804A (en) Marked data quality inspection method and device, storage medium and electronic equipment
CN114067283A (en) Violation identification method and device, electronic equipment and storage medium
CN109117035B (en) Method for hiding floating icon, terminal recovery system and storage medium
CN113052058A (en) Vehicle-mounted passenger flow statistical method and device and storage medium
CN111639640A (en) License plate recognition method, device and equipment based on artificial intelligence
CN107071231A (en) Image change recognition methods and device
CN113449545A (en) Data processing method, device, storage medium and processor
KR102151912B1 (en) Video surveillance system and method performed thereat
KR20190101577A (en) real-time status information extraction method for train using image processing and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination