CN110718294B - Intelligent medical guide robot and intelligent medical guide method - Google Patents

Intelligent medical guide robot and intelligent medical guide method

Info

Publication number
CN110718294B
CN110718294B (application number CN201910803483.1A)
Authority
CN
China
Prior art keywords
image
hospital
information
functional areas
characters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910803483.1A
Other languages
Chinese (zh)
Other versions
CN110718294A (en)
Inventor
颜景春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Intelligent Technology Co Ltd
Original Assignee
Unisound Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisound Intelligent Technology Co Ltd
Priority to CN201910803483.1A
Publication of CN110718294A
Application granted
Publication of CN110718294B

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60: ICT specially adapted for the management or operation of medical equipment or devices, for the operation of medical equipment or devices
    • G16H40/63: ICT specially adapted for the management or operation of medical equipment or devices, for the operation of medical equipment or devices for local operation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583: Retrieval characterised by using metadata automatically derived from the content
    • G06F16/5846: Retrieval characterised by using metadata automatically derived from the content, using extracted text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10: Character recognition

Abstract

The invention provides an intelligent medical guide robot and an intelligent medical guide method. The robot and method update the setting states of the different functional areas of a hospital in real time through real-time image acquisition, and automatically extract the indication information of each functional area from the acquired images. Changes to the functional areas therefore no longer require manual re-acquisition of their information, which effectively reduces the labor cost of the medical guidance process, avoids the operation errors and work delays that manual intervention may cause, and makes later maintenance fast and accurate. The robot and method also adapt well to hospital layouts in different settings.

Description

Intelligent medical guide robot and intelligent medical guide method
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an intelligent medical guiding robot and an intelligent medical guiding method.
Background
At present, as the flow of people through hospitals keeps growing, guide maps such as hospital department distribution maps or fire evacuation maps are usually placed at conspicuous positions inside a hospital so that people can locate themselves accurately. However, the number and placement of such guide maps are limited and cannot meet everyone's needs, and the maps themselves may be unclear, leading to wrong directions. To further improve the intelligence of in-hospital positioning and guidance, many hospitals have added guide devices with functions such as dynamic text broadcasting or voice broadcasting. These devices, however, must be manually debugged and assembled before they can work, and manual debugging and assembly generally suffers from a high error rate, low efficiency and speed, long setup time, and heavy consumption of human resources. The devices also need regular servicing after a period of operation and must be adapted whenever the hospital's internal organization changes, so the hospital has to assign dedicated manpower and material resources to manage them, which inevitably increases its workload. The prior art therefore urgently needs an intelligent medical guide robot and an intelligent medical guide method that require little manual intervention, are convenient to assemble, are simple to maintain, and work efficiently.
Disclosure of Invention
To address the defects of the prior art, the invention provides an intelligent medical guide robot and an intelligent medical guide method. The intelligent medical guide robot comprises an image acquisition module, a key feature information generation module, a matching relation library generation module and a voice broadcast module. The image acquisition module is used for acquiring a plurality of images of different functional areas of a hospital; the key feature information generation module is used for acquiring the key feature information corresponding to each of the plurality of images; the matching relation library generation module is used for generating a matching relation library between the different functional areas of the hospital and their corresponding area information according to the key feature information corresponding to each of the plurality of images; the voice broadcast module is used for carrying out voice indication broadcast operations for the different functional areas of the hospital according to the matching relation library. Accordingly, the intelligent medical guide method performs the same medical guidance operations through the intelligent medical guide robot.
Therefore, unlike the hospital guide devices and methods of the prior art, which are limited to positioning and guidance through static images or voice broadcasts, the intelligent medical guide robot and method update the setting states of the different functional areas of the hospital in real time through real-time image acquisition and automatically extract the indication information of each functional area from the acquired images, so that changes to the functional areas no longer require manual re-acquisition of their information, effectively reducing the labor cost of the guidance process. Second, the robot and method determine the inherent position attributes of the different functional areas by constructing a matching relation library, which reduces the manual intervention involved in locating and confirming those areas, avoids the operation errors and work delays that manual intervention may cause, and makes later maintenance fast and accurate. In addition, the robot and method guide users through played voice indications, which improves the efficiency and effectiveness of position guidance. Finally, the robot and method can be used in hospital institutions in different settings and adapt well to different environments.
The invention provides an intelligent medical guiding robot, which is characterized in that:
the intelligent medical guiding robot comprises an image acquisition module, a key feature information generation module, a matching relation library generation module and a voice broadcast module; wherein,
the image acquisition module is used for acquiring a plurality of images related to different functional areas of a hospital;
the key characteristic information generating module is used for acquiring key characteristic information corresponding to each of the plurality of images;
the matching relation library generating module is used for generating a matching relation library between different functional areas of the hospital and corresponding area information thereof according to the key feature information corresponding to each of the plurality of images;
the voice broadcasting module is used for carrying out voice instruction broadcasting operation on different functional areas of the hospital according to the matching relation library;
further, the image acquisition module comprises a hospital inherent information determining submodule, a distribution map generating submodule, an image acquisition mode determining submodule and a shooting submodule; wherein,
the hospital inherent information determining submodule is used for acquiring position distribution state information and/or function scene information of different function areas of the hospital in advance;
the distribution map generation submodule is used for forming distribution maps of different functional areas of the hospital according to the position distribution state information and/or the functional scene information;
the image acquisition mode determining submodule is used for determining an image acquisition mode corresponding to each of different functional areas of the hospital according to the distribution maps of the different functional areas of the hospital;
the shooting submodule is used for shooting a plurality of images corresponding to each of different functional areas of the hospital according to the image acquisition mode;
further, the key feature information generation module comprises an image recognition sub-module and a natural language understanding sub-module; wherein,
the image recognition submodule is used for carrying out image recognition processing on each of the plurality of images through a preset image recognition neural network model so as to obtain the text content and/or symbol content of each image;
the natural language understanding submodule is used for carrying out understanding and identifying processing on the character content and/or symbol content included in each image so as to obtain the key characteristic information;
further, the matching relation library generating module comprises a character translation sub-module, a correction sub-module, a mapping relation building sub-module and a data structure generating sub-module; wherein,
the character translation submodule is used for translating the key characteristic information corresponding to each image into character description contents of at least one language through a preset character translation neural network model;
the correction submodule is used for performing adaptive correction processing on the text description content;
the mapping relation construction sub-module is used for constructing a mapping relation between the text description content corresponding to each image and the functional area corresponding to each image;
the data structure generation submodule is used for generating a matching relation library between different functional areas of the hospital and corresponding area information thereof through a preset data processing neural network model according to the mapping relation;
further, the voice broadcasting module comprises a voice translation sub-module and an audio playing sub-module; the voice translation sub-module is used for translating the text description contents in the matching relation library into voice signals of at least one language through a preset voice translation neural network model;
the audio playing submodule is used for selecting proper voice signals according to a current function area guiding request input from the outside and playing the voice signals through voice indication broadcasting operation.
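The four modules above form a simple capture, extract, index, and announce pipeline. The sketch below illustrates that flow with toy stand-ins for the camera, the recognition models, and the speech synthesizer; all class and function names are invented for illustration and are not the patent's API.

```python
class GuideRobot:
    """Toy sketch of the four-module pipeline; not the patent's actual API."""

    def __init__(self, capture_fn, extract_fn, synthesize_fn):
        self.capture_fn = capture_fn        # stands in for the image acquisition module
        self.extract_fn = extract_fn        # stands in for the key feature information module
        self.synthesize_fn = synthesize_fn  # stands in for the voice broadcast module
        self.library = {}                   # the matching relation library

    def build_library(self):
        # capture an image per functional area, extract its key feature text,
        # and record the area -> description pair in the library
        for area, image in self.capture_fn().items():
            self.library[area] = self.extract_fn(image)
        return self.library

    def guide(self, requested_area):
        # answer an externally input functional-area guidance request
        description = self.library.get(requested_area, "unknown area")
        return self.synthesize_fn(description)

# Toy stand-ins for the hardware and models:
robot = GuideRobot(
    capture_fn=lambda: {"radiology": "sign: Radiology, Floor 2"},
    extract_fn=lambda img: img.split("sign: ")[1],
    synthesize_fn=lambda text: f"[voice] Please proceed to {text}",
)
robot.build_library()
print(robot.guide("radiology"))  # -> [voice] Please proceed to Radiology, Floor 2
```

In a real deployment the three callables would wrap the shooting submodule, the preset recognition and translation networks, and the speech-synthesis model described above.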
The invention also provides an intelligent medical guidance method, which is characterized by comprising the following steps:
step (1), acquiring a plurality of images of different functional areas of a hospital, and acquiring the key feature information corresponding to each of the plurality of images;
step (2), according to the key feature information corresponding to each of the plurality of images, generating a matching relation library between different functional areas of the hospital and the corresponding area information;
step (3), according to the matching relation library, carrying out voice indication broadcasting operation on different functional areas of the hospital;
further, in the step (1), acquiring a plurality of images of different functional areas of the hospital, and acquiring key feature information corresponding to each of the plurality of images specifically includes,
the method comprises the steps that (101) position distribution state information and/or function scene information of different functional areas of the hospital are obtained in advance, and therefore distribution maps of the different functional areas of the hospital are formed;
step (102), determining an image acquisition mode corresponding to each of different functional areas of the hospital according to the distribution maps of the different functional areas of the hospital, and acquiring the plurality of images;
step (103), each of the plurality of images is subjected to adaptive image analysis processing, so that character information and/or symbol information contained in each image is obtained, and the key feature information is obtained;
further, in the step (101), acquiring the position distribution status information and/or the function scene information of different functional areas of the hospital in advance, so as to form a distribution map of different functional areas of the hospital specifically includes,
a step (1011) of obtaining in advance at least one of floor information, orientation information, and relative distance information with respect to an entrance/exit where each of the different functional areas of the hospital is located, as the position distribution state information;
step (1012), pre-acquiring corresponding use information of each of different functional areas of the hospital as the functional scene information;
a step (1013) of performing label matching processing on the position distribution state information and the functional scene information with respect to the building structure of the hospital, and generating a three-dimensional distribution map having a three-dimensional distribution label state with respect to different functional areas of the hospital as the distribution map;
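Steps (1011)-(1013) amount to attaching a small label record to every functional area and keying it by its three-dimensional position in the building. A minimal sketch, with invented field names and example data:

```python
from dataclasses import dataclass

@dataclass
class AreaLabel:
    name: str          # functional area, e.g. "pharmacy"
    floor: int         # step (1011): floor information
    orientation: str   # step (1011): orientation relative to the entrance/exit
    distance_m: float  # step (1011): relative distance to the entrance/exit
    usage: str         # step (1012): functional scene (use) information

def build_distribution_map(labels):
    # step (1013): key each label by (floor, name) so every functional area
    # carries a unique three-dimensional distribution label in the map
    return {(lab.floor, lab.name): lab for lab in labels}

areas = [
    AreaLabel("pharmacy", 1, "east", 35.0, "dispensing medication"),
    AreaLabel("radiology", 2, "west", 60.0, "X-ray and CT imaging"),
]
dist_map = build_distribution_map(areas)
print(dist_map[(1, "pharmacy")].usage)  # -> dispensing medication
```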
alternatively,
in the step (102), according to the distribution map of different functional areas of the hospital, determining an image acquisition mode corresponding to each of the different functional areas of the hospital, so as to acquire the plurality of images,
a step (1021) of obtaining at least one of area spatial size and/or area illumination information for each of the different functional areas from the distribution map, thereby determining an image capturing action condition for each functional area;
step (1022), according to the image capturing action condition, determining at least one of a shooting focal length, an exposure time and a shooting angle of view corresponding to each functional area image capturing action, so as to determine the image capturing mode;
step (1023), according to the respective image shooting mode of each functional area, carrying out acquisition operation of a plurality of images on the direction board and the area scene of each functional area;
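Steps (1021)-(1023) select capture parameters from each area's spatial size and illumination. The rule-of-thumb mapping below is a hedged illustration only; the thresholds and parameter values are invented examples, not values taken from the patent:

```python
def choose_capture_mode(area_m2, lux):
    # step (1022): pick shooting focal length, exposure time and field of view
    # from the conditions gathered in step (1021); all thresholds are invented
    focal_mm = 50 if area_m2 < 30 else 24          # small rooms: longer focal length
    exposure_s = 1 / 250 if lux > 300 else 1 / 60  # dim areas: longer exposure
    fov_deg = 60 if area_m2 < 30 else 90           # large halls: wider field of view
    return {"focal_mm": focal_mm, "exposure_s": exposure_s, "fov_deg": fov_deg}

mode = choose_capture_mode(area_m2=80, lux=150)  # a large, dimly lit hall
print(mode["focal_mm"], mode["fov_deg"])  # -> 24 90
```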
alternatively,
in the step (103), each of the plurality of images is subjected to adaptive image analysis processing, so as to obtain text information and/or symbol information contained in each image, thereby obtaining the key feature information specifically including,
step (1031), performing image recognition processing on each of the plurality of images through a preset image recognition neural network model so as to obtain the text content and/or the symbol content of each image, wherein the image recognition processing comprises,
s1, converting the image to be identified into a gray image, and obtaining a binary image corresponding to the gray image according to the maximum brightness value and the minimum brightness value corresponding to the gray image;
s2, obtaining a discrete black point distribution state corresponding to the binary image, and carrying out corrosion expansion processing on the binary image according to the discrete black point distribution state;
s3, respectively carrying out X-direction and Y-direction area selection processing on the image subjected to the erosion and expansion processing, wherein the X-direction and Y-direction area selection processing is realized by a mode of expanding and scanning from the middle to two sides;
s4, cutting the characters in the corresponding image area according to the area selection processing results in the X direction and the Y direction, and placing the characters obtained after cutting in a one-dimensional array;
s5, creating a corresponding character set according to the characters in the one-dimensional array, and performing character matching processing on the character set by adopting a preset template matching algorithm, wherein the preset template matching algorithm comprises the step of subtracting the characters in the character set from preset template characters, and if the subtraction result meets a preset error condition, indicating that the characters in the character set are matched with the preset template characters;
s6, writing the characters which are determined in the step S5 and are matched with the preset template characters into texts as the word contents and/or the symbol contents for the processing of the following step (1032);
step (1032), the character content and/or symbol content included in each image is subjected to understanding and identifying processing of natural language, so that the key characteristic information is obtained;
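The S1-S6 recognition pipeline can be sketched in a few lines of numpy. The version below binarizes at the midpoint brightness (S1), segments characters by column projection (a simplified left-to-right version of S3/S4's middle-outward scan), and matches each cut glyph against preset templates by pixel subtraction (S5/S6); S2's erosion and dilation is omitted for brevity, and a real system would use the trained recognition network named in step (1031):

```python
import numpy as np

def binarize(gray):
    # S1: threshold at the midpoint of the image's maximum and minimum brightness
    thresh = (int(gray.max()) + int(gray.min())) / 2
    return (gray < thresh).astype(np.uint8)  # 1 marks a dark (ink) pixel

def segment_columns(binary):
    # S3/S4 (simplified): column projection; a character is a maximal run of
    # consecutive non-empty columns, cut out and collected in order
    cols = binary.sum(axis=0) > 0
    spans, start = [], None
    for x, filled in enumerate(cols):
        if filled and start is None:
            start = x
        elif not filled and start is not None:
            spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(cols)))
    return [binary[:, a:b] for a, b in spans]

def match(glyph, templates, max_err=0):
    # S5: subtract each preset template from the glyph and accept it when the
    # number of differing pixels satisfies the preset error condition
    for label, tmpl in templates.items():
        if glyph.shape == tmpl.shape and \
                np.abs(glyph.astype(int) - tmpl.astype(int)).sum() <= max_err:
            return label
    return "?"

# Two dark vertical strokes separated by a blank column:
gray = np.array([[0, 255, 0],
                 [0, 255, 0],
                 [0, 255, 0]], dtype=np.uint8)
templates = {"I": np.ones((3, 1), dtype=np.uint8)}
text = "".join(match(g, templates) for g in segment_columns(binarize(gray)))
print(text)  # -> II  (S6: the matched characters are written out as text)
```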
further, in the step (2), generating a matching relation library between different functional areas of the hospital and corresponding area information thereof according to the key feature information corresponding to each of the plurality of images specifically includes,
step (201), translating the key characteristic information corresponding to each image into at least one language text description content by a preset text translation neural network model, and performing adaptive correction processing on the text description content;
step (202), constructing a mapping relation between the text description content corresponding to each image and the functional area corresponding to each image;
step (203), according to the mapping relation, generating a matching relation library between different functional areas of the hospital and corresponding area information thereof through a preset data processing neural network model so as to represent a one-to-one corresponding data structure relation between the different functional areas of the hospital and the corresponding area information thereof;
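Steps (201)-(203) reduce each image's recognized description to a corrected text and fold it into a one-to-one area-to-description library. The sketch below stubs the preset translation and correction networks with plain string normalization; all names and sample records are invented:

```python
def correct(desc):
    # step (201)'s adaptive correction, stubbed as whitespace/case normalization
    return " ".join(desc.split()).title()

def build_matching_library(image_records):
    # steps (202)-(203): map each functional area to its corrected description;
    # later images of the same area simply overwrite earlier ones, keeping the
    # area -> area-information relation one-to-one
    library = {}
    for area, raw_desc in image_records:
        library[area] = correct(raw_desc)
    return library

records = [
    ("outpatient hall", "  outpatient   registration hall "),
    ("pharmacy", "western medicine  pharmacy"),
]
library = build_matching_library(records)
print(library["pharmacy"])  # -> Western Medicine Pharmacy
```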
further, in the step (3), the performing of the voice instruction broadcasting operation on the different functional areas of the hospital according to the matching relationship library specifically includes,
step (301), extracting the position of each functional area and the character description content of each functional area guide from the matching relation library;
step (302), translating the text description content into a voice signal of at least one language through a preset voice translation neural network model;
step (303), selecting a proper voice signal according to the currently externally input functional-area guidance request, and playing the voice signal through the voice indication broadcast operation.
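Steps (301)-(303) then reduce to a lookup keyed by functional area and requested language. In the sketch below the preset speech-translation network is replaced by a table of pre-rendered phrases; the phrases and area names are invented examples:

```python
# Pre-rendered phrases stand in for the preset speech-translation network;
# all entries are invented examples.
PHRASES = {
    ("pharmacy", "en"): "The pharmacy is on floor 1, to your east.",
    ("pharmacy", "zh"): "药房位于一层东侧。",
}

def announce(area, language="en"):
    # step (303): select the voice signal matching the requested area and
    # language, falling back to a fixed reply when no phrase is stored
    phrase = PHRASES.get((area, language))
    return phrase if phrase else f"No guidance available for {area!r}."

print(announce("pharmacy", "en"))  # -> The pharmacy is on floor 1, to your east.
```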
Compared with the hospital guide devices and methods of the prior art, which are limited to positioning and guidance through static images or voice broadcasts, the intelligent medical guide robot and method update the setting states of the different functional areas of the hospital in real time through real-time image acquisition and automatically extract the indication information of each functional area from the acquired images, so that changes to the functional areas no longer require manual re-acquisition of their information, effectively reducing the labor cost of the guidance process. Second, the robot and method determine the inherent position attributes of the different functional areas by constructing a matching relation library, which reduces the manual intervention involved in locating and confirming those areas, avoids the operation errors and work delays that manual intervention may cause, and makes later maintenance fast and accurate. In addition, the robot and method guide users through played voice indications, which improves the efficiency and effectiveness of position guidance. Finally, the robot and method can be used in hospital institutions in different settings and adapt well to different environments.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an intelligent medical guidance robot provided by the invention.
Fig. 2 is a schematic flow chart of an intelligent medical guidance method provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic structural diagram of an intelligent medical guidance robot according to an embodiment of the present invention. The intelligent medical guide robot comprises an image acquisition module, a key feature information generation module, a matching relation library generation module and a voice broadcast module; wherein,
the image acquisition module is used for acquiring a plurality of images related to different functional areas of a hospital;
the key characteristic information generating module is used for acquiring key characteristic information corresponding to each of the plurality of images;
the matching relation library generating module is used for generating a matching relation library between different functional areas of the hospital and corresponding area information thereof according to the key characteristic information corresponding to each of the plurality of images;
the voice broadcast module is used for carrying out voice indication broadcast operation about different functional areas of the hospital according to the matching relation library.
Preferably, the image acquisition module comprises a hospital inherent information determining submodule, a distribution map generating submodule, an image acquisition mode determining submodule and a shooting submodule;
preferably, the hospital intrinsic information determining submodule is used for acquiring position distribution state information and/or function scene information of different function areas of the hospital in advance;
preferably, the distribution map generation sub-module is used for forming distribution maps of different functional areas of the hospital according to the position distribution state information and/or the functional scene information;
preferably, the image acquisition mode determining submodule is configured to determine, according to distribution maps of different functional areas of the hospital, an image acquisition mode corresponding to each of the different functional areas of the hospital;
preferably, the photographing sub-module is configured to photograph a plurality of images corresponding to each of different functional areas of the hospital according to the image acquisition mode.
Preferably, the key feature information generating module comprises an image recognition sub-module and a natural language understanding sub-module;
preferably, the image recognition sub-module is configured to perform image recognition processing on each of the plurality of images through a preset image recognition neural network model, so as to obtain text content and/or symbol content included in each image;
preferably, the natural language understanding sub-module is configured to perform natural language understanding recognition processing on the text content and/or the symbolic content included in each image, so as to obtain the key feature information.
Preferably, the matching relation library generating module comprises a text translation sub-module, a correction sub-module, a mapping relation constructing sub-module and a data structure generating sub-module;
preferably, the text translation sub-module is configured to translate the key feature information corresponding to each image into text description content of at least one language through a preset text translation neural network model;
preferably, the modification submodule is configured to perform adaptive modification processing on the text description content;
preferably, the mapping relation constructing sub-module is configured to construct a mapping relation between the text description content corresponding to each image and the functional area corresponding to each image;
preferably, the data structure generation submodule is configured to generate a matching relationship library between different functional areas of the hospital and corresponding area information thereof through a preset data processing neural network model according to the mapping relationship.
Preferably, the voice broadcasting module comprises a voice translation sub-module and an audio playing sub-module;
preferably, the speech translation sub-module is configured to translate the text description content in the matching relationship library into a speech signal of at least one language through a preset speech translation neural network model;
preferably, the audio playing sub-module is configured to select a suitable voice signal according to a currently externally input function area guidance request, and play the voice signal through the voice instruction broadcast operation.
Fig. 2 is a schematic flow chart of an intelligent medical guiding method according to an embodiment of the present invention. The intelligent medical guiding method comprises the following steps:
step (1), acquiring a plurality of images of different functional areas of a hospital, and acquiring the key feature information corresponding to each of the plurality of images.
Preferably, in step (1), acquiring a plurality of images about different functional areas of the hospital, and acquiring key feature information about each of the plurality of images includes,
step (101), obtaining in advance position distribution state information and/or function scene information of the different functional areas of the hospital, so as to form distribution maps of the different functional areas of the hospital;
step (102), determining an image acquisition mode corresponding to each of different functional areas of the hospital according to distribution maps of the different functional areas of the hospital, and acquiring a plurality of images;
and (103) carrying out adaptive image analysis processing on each of the plurality of images so as to obtain character information and/or symbol information contained in each image, thereby obtaining the key feature information.
Preferably, in the step (101), the obtaining the location distribution status information and/or the functional scenario information of different functional areas of the hospital in advance to form the distribution map of different functional areas of the hospital specifically includes,
a step (1011) of obtaining in advance at least one of floor information, orientation information, and relative distance information with respect to an entrance/exit where each of the different functional areas of the hospital is located, as the position distribution state information;
step (1012), pre-acquiring corresponding use information of each of different functional areas of the hospital, and using the use information as the functional scene information;
and (1013) performing label matching processing on the position distribution state information and the function scene information about the building structure of the hospital, and generating a three-dimensional distribution map having a three-dimensional distribution label state about different function areas of the hospital as the distribution map.
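The labeled three-dimensional distribution map of steps (1011)-(1013) can be sketched as a simple record structure; the field names and values below are illustrative assumptions, since the patent only names the categories of information (floor, orientation, entrance distance, usage) and not a concrete data layout:

```python
# Hypothetical sketch of the label-matched distribution map of steps
# (1011)-(1013); all names and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class FunctionalAreaLabel:
    name: str                   # functional area, e.g. "Radiology"
    floor: int                  # floor information (step 1011)
    orientation: str            # orientation within the building
    entrance_distance_m: float  # relative distance to the entrance/exit
    usage: str                  # functional scene information (step 1012)

def build_distribution_map(areas):
    """Label-match position and scene info into one map (step 1013)."""
    return {a.name: a for a in areas}

areas = [
    FunctionalAreaLabel("Radiology", 2, "NE", 40.0, "imaging examinations"),
    FunctionalAreaLabel("Pharmacy", 1, "SW", 15.0, "medicine dispensing"),
]
dist_map = build_distribution_map(areas)
print(dist_map["Pharmacy"].floor)  # -> 1
```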
Preferably, in the step (102), the image acquisition mode corresponding to each of the different functional areas of the hospital is determined according to the distribution map of the different functional areas of the hospital, so as to acquire the plurality of images specifically including,
a step (1021) of obtaining at least one of area spatial size and/or area illumination information for each of the different functional areas from the distribution map, thereby determining an image capturing action condition for each functional area;
step (1022), according to the image capturing action condition, determining at least one of a shooting focal length, an exposure time and a shooting angle of view corresponding to each functional area image capturing action, so as to determine the image capturing mode;
and (1023) acquiring a plurality of images of the direction board and the area scene of each functional area according to the respective image shooting mode of each functional area.
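Steps (1021)-(1022) map each area's spatial size and illumination to capture parameters. The patent gives no concrete formulas, so the thresholds in this sketch are assumptions chosen only to show the shape of such a mapping:

```python
# Illustrative only: the cutoff values below are assumptions, not taken
# from the patent, which leaves the mapping unspecified.
def image_capture_mode(area_size_m2, illuminance_lux):
    # Larger areas -> wider field of view (shorter focal length).
    focal_length_mm = 24 if area_size_m2 > 50 else 50
    # Darker areas -> longer exposure time.
    exposure_s = 1 / 60 if illuminance_lux < 200 else 1 / 250
    # Shooting angle of view follows the chosen focal length.
    view_angle_deg = 84 if focal_length_mm == 24 else 46
    return {"focal_length_mm": focal_length_mm,
            "exposure_s": exposure_s,
            "view_angle_deg": view_angle_deg}

mode = image_capture_mode(area_size_m2=80, illuminance_lux=150)
print(mode["focal_length_mm"])  # -> 24
```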
Preferably, in the step (103), each of the plurality of images is subjected to adaptive image analysis processing, so as to obtain text information and/or symbol information included in each image, thereby obtaining the key feature information specifically including,
step (1031), through presetting an image recognition neural network model, carrying out image recognition processing on each of the plurality of images so as to obtain character content and/or symbol content which each image respectively comprises;
step (1032), the character content and/or symbol content included in each image is subjected to understanding and identifying processing of natural language, so as to obtain the key characteristic information;
preferably, the image recognition process includes,
s1, converting the image to be identified into a gray image, and obtaining a binary image corresponding to the gray image according to the maximum brightness value and the minimum brightness value corresponding to the gray image;
s2, acquiring the discrete black-point distribution state corresponding to the binary image, and performing erosion and dilation processing on the binary image according to that distribution state;
s3, performing region selection processing in the X direction and the Y direction on the image after the erosion and dilation processing, the region selection being carried out by expanding and scanning from the middle outward to both sides;
s4, cutting out the characters in the corresponding image region according to the results of the X-direction and Y-direction region selection processing, and placing the characters obtained after cutting into a one-dimensional array;
s5, creating a corresponding character set from the characters in the one-dimensional array, and performing character matching on the character set with a preset template matching algorithm, the preset template matching algorithm comprising subtracting the characters in the character set from preset template characters; if the subtraction result meets a preset error condition, the character in the character set matches the preset template character. For example, for a character A in the character set (Figure BDA0002182976490000121), one preset template character T1 (Figure BDA0002182976490000122) and another preset template character T2 (Figure BDA0002182976490000123), the following subtraction processing is performed: Σ|A-T1| = 8 and Σ|A-T2| = 2. The subtraction error between character A and preset template character T2 is smaller than that between character A and preset template character T1, so character A is considered more similar to preset template character T2, and preset template character T2 is thereby determined to be character A;
s6, writing the characters determined in step S5 to match the preset template characters into the text as the character content and/or symbol content for the processing of step (1032).
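Steps S1 and S5 above can be sketched as follows. The 3x3 bitmaps for A, T1 and T2 are invented purely to reproduce the subtraction errors quoted in the text (Σ|A-T1| = 8, Σ|A-T2| = 2); the patent does not disclose the actual character templates:

```python
# Minimal sketch of binarization (step S1) and sum-of-absolute-difference
# template matching (step S5). Bitmaps are illustrative assumptions.

def binarize(gray):
    """Step S1: threshold at the midpoint of the min/max brightness."""
    lo = min(min(row) for row in gray)
    hi = max(max(row) for row in gray)
    t = (lo + hi) / 2
    return [[1 if px > t else 0 for px in row] for row in gray]

def sad(a, b):
    """Step S5: sum of absolute differences between two character bitmaps."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

A  = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]   # character cut from the image
T1 = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]   # template differing in 8 cells
T2 = [[1, 1, 1], [1, 1, 1], [0, 1, 1]]   # template differing in 2 cells

errs = {"T1": sad(A, T1), "T2": sad(A, T2)}
best = min(errs, key=errs.get)           # template with the smallest error
print(errs["T1"], errs["T2"], best)      # -> 8 2 T2
```

Because the error against T2 is smaller, the character is identified as T2, matching the worked example in the text.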
And (2) generating a matching relation library between different functional areas of the hospital and the corresponding area information thereof according to the key characteristic information corresponding to each of the plurality of images.
Preferably, in the step (2), the generating of the matching relation library between different functional areas of the hospital and the corresponding area information thereof according to the corresponding key feature information of each of the plurality of images specifically includes,
step (201), translating the key characteristic information corresponding to each image into at least one language text description content by a preset text translation neural network model, and performing adaptive correction processing on the text description content;
step (202), constructing a mapping relation between the text description content corresponding to each image and the functional area corresponding to each image;
and (203) generating a matching relation library between different functional areas of the hospital and corresponding area information thereof through a preset data processing neural network model according to the mapping relation so as to represent the one-to-one corresponding data structure relation between the different functional areas of the hospital and the corresponding area information thereof.
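The one-to-one matching relation library of step (203) can be pictured as a mapping from each functional area to its area information. The dictionary layout and all entries below are assumptions for illustration; the patent specifies only that areas correspond one-to-one with their region information:

```python
# Hypothetical matching relation library (step 203): each functional area
# maps to one record of location and multilingual guide text (step 201).
matching_library = {
    "Radiology": {
        "location": "Floor 2, north-east wing",
        "guide_text": {"zh": "放射科位于二楼东北侧",
                       "en": "Radiology is on floor 2, north-east side"},
    },
    "Pharmacy": {
        "location": "Floor 1, south-west wing",
        "guide_text": {"zh": "药房位于一楼西南侧",
                       "en": "The pharmacy is on floor 1, south-west side"},
    },
}

print(matching_library["Radiology"]["guide_text"]["en"])
```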
And (3) carrying out voice instruction broadcasting operation on different functional areas of the hospital according to the matching relation library.
Preferably, in the step (3), the performing of the voice instruction broadcasting operation on the different functional areas of the hospital according to the matching relation library specifically includes,
step (301), extracting the position of each functional area and the character description content of each functional area guide from the matching relation library;
step (302), translating the text description content into a voice signal of at least one language through a preset voice translation neural network model;
and (303) selecting a proper voice signal according to a function area guide request input from the outside at present, and playing the voice signal through the voice indication broadcasting operation.
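Steps (301)-(303) amount to looking up the guide text for the requested area and language and handing it to the audio playing sub-module. The library layout and the `play` stub below are assumptions; a real robot would invoke a speech-synthesis engine rather than return a string:

```python
# Sketch of steps (301)-(303); data and the play() stub are illustrative.
library = {
    "Pharmacy": {"en": "The pharmacy is on floor 1, south-west side",
                 "zh": "药房位于一楼西南侧"},
}

def select_voice_signal(request_area, language, lib):
    """Step (303): pick the guide text matching the external request."""
    return lib[request_area][language]

def play(text):
    # Stand-in for the audio playing sub-module; real hardware would
    # synthesize and play the speech signal here.
    return f"[playing] {text}"

msg = play(select_voice_signal("Pharmacy", "en", library))
print(msg)  # -> [playing] The pharmacy is on floor 1, south-west side
```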
As can be seen from the above embodiments, the intelligent medical guiding robot and the intelligent medical guiding method differ from prior-art hospital guiding devices and methods, which are limited to positioning and guiding through static images or voice broadcasts. First, they update the setting states of the different functional areas of the hospital in real time through real-time image acquisition and automatically extract the indication information of each functional area from the acquired images, so that setting information need not be manually re-collected when a functional area of the hospital is changed, effectively reducing the labor cost of the medical guiding process. Second, by constructing a matching relation library they determine the inherent position attributes of the different functional areas, which reduces the manual intervention needed to locate and confirm those areas, avoids the operation errors and work delays such intervention may cause, and makes later maintenance quick and accurate. In addition, guiding and prompting through voice indication playback improves the efficiency and effectiveness of position guidance. Finally, the robot and method are usable in hospital institutions of different kinds and adapt well to different environments.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. An intelligent medical guiding robot, characterized in that:
the intelligent medical guiding robot comprises an image acquisition module, a key feature information generation module, a matching relation library generation module and a voice broadcast module; wherein,
the image acquisition module is used for acquiring a plurality of images related to different functional areas of a hospital;
the key characteristic information generating module is used for acquiring key characteristic information corresponding to each of the plurality of images;
the matching relation library generating module is used for generating a matching relation library between different functional areas of the hospital and corresponding area information thereof according to the key feature information corresponding to each of the plurality of images;
the voice broadcasting module is used for carrying out voice instruction broadcasting operation on different functional areas of the hospital according to the matching relation library;
the image acquisition module comprises a hospital inherent information determining submodule, a distribution map generating submodule, an image acquisition mode determining submodule and a shooting submodule; wherein,
the hospital inherent information determining submodule is used for acquiring position distribution state information and/or function scene information of different function areas of the hospital in advance;
the distribution map generation submodule is used for forming distribution maps of different functional areas of the hospital according to the position distribution state information and/or the functional scene information;
the image acquisition mode determining submodule is used for determining an image acquisition mode corresponding to each of different functional areas of the hospital according to the distribution maps of the different functional areas of the hospital;
the shooting submodule is used for shooting a plurality of images corresponding to each of different functional areas of the hospital according to the image acquisition mode;
the key characteristic information generation module comprises an image recognition sub-module and a natural language understanding sub-module; wherein,
the image recognition submodule is used for carrying out image recognition processing on each of the plurality of images through a preset image recognition neural network model so as to obtain the text content and/or symbol content of each image; the natural language understanding submodule is used for carrying out understanding and identifying processing on the character content and/or symbol content included in each image so as to obtain the key characteristic information;
the image recognition submodule for performing image recognition processing comprises:
s1, converting the image to be identified into a gray image, and obtaining a binary image corresponding to the gray image according to the maximum brightness value and the minimum brightness value corresponding to the gray image;
s2, obtaining the discrete black-point distribution state corresponding to the binary image, and performing erosion and dilation processing on the binary image according to that distribution state;
s3, performing region selection processing in the X direction and the Y direction on the image after the erosion and dilation processing, the region selection being carried out by expanding and scanning from the middle outward to both sides;
s4, cutting out the characters in the corresponding image region according to the results of the X-direction and Y-direction region selection processing, and placing the characters obtained after cutting into a one-dimensional array;
s5, creating a corresponding character set from the characters in the one-dimensional array, and performing character matching on the character set with a preset template matching algorithm, the preset template matching algorithm comprising subtracting the characters in the character set from preset template characters; if the subtraction result meets a preset error condition, the character in the character set matches the preset template character. For example, for a character A in the character set (Figure FDA0003444817720000021), one preset template character T1 (Figure FDA0003444817720000022) and another preset template character T2 (Figure FDA0003444817720000031), the following subtraction processing is performed: Σ|A-T1| = 8 and Σ|A-T2| = 2. The subtraction error between character A and preset template character T2 is smaller than that between character A and preset template character T1, so character A is considered more similar to preset template character T2, and preset template character T2 is thereby determined to be character A;
s6, writing the characters determined in step S5 to match the preset template characters into a text as the character content and/or symbol content.
2. The intelligent medical guidance robot of claim 1, wherein:
the matching relation library generating module comprises a character translation submodule, a correction submodule, a mapping relation constructing submodule and a data structure generating submodule; wherein,
the character translation submodule is used for translating the key characteristic information corresponding to each image into character description contents of at least one language through a preset character translation neural network model;
the correction submodule is used for performing adaptive correction processing on the text description content;
the mapping relation construction sub-module is used for constructing a mapping relation between the text description content corresponding to each image and the functional area corresponding to each image;
and the data structure generation submodule is used for generating a matching relation library between different functional areas of the hospital and corresponding area information thereof through a preset data processing neural network model according to the mapping relation.
3. The intelligent medical guidance robot of claim 1, wherein:
the voice broadcasting module comprises a voice translation sub-module and an audio playing sub-module; wherein,
the voice translation sub-module is used for translating the text description contents in the matching relation library into voice signals of at least one language through a preset voice translation neural network model;
the audio playing submodule is used for selecting proper voice signals according to a current function area guiding request input from the outside and playing the voice signals through voice indication broadcasting operation.
4. An intelligent medical guidance method is characterized by comprising the following steps:
step (1), acquiring a plurality of images related to different functional areas of a hospital, and obtaining key characteristic information corresponding to each of the plurality of images;
step (2), according to the key feature information corresponding to each of the plurality of images, generating a matching relation library between different functional areas of the hospital and the corresponding area information;
step (3), according to the matching relation library, carrying out voice indication broadcasting operation on different functional areas of the hospital;
in the step (1), acquiring a plurality of images related to different functional areas of a hospital, and acquiring key feature information corresponding to each of the plurality of images specifically includes,
step (101), obtaining in advance position distribution state information and/or function scene information of the different functional areas of the hospital, so as to form distribution maps of the different functional areas of the hospital;
step (102), determining an image acquisition mode corresponding to each of different functional areas of the hospital according to the distribution maps of the different functional areas of the hospital, and acquiring the plurality of images;
step (103), each of the plurality of images is subjected to adaptive image analysis processing, so that character information and/or symbol information contained in each image is obtained, and the key feature information is obtained;
in the step (101), the obtaining of the position distribution state information and/or the function scene information of different functional areas of the hospital in advance to form a distribution map of different functional areas of the hospital specifically includes,
a step (1011) of obtaining in advance at least one of floor information, orientation information, and relative distance information with respect to an entrance/exit where each of the different functional areas of the hospital is located, as the position distribution state information;
step (1012), pre-acquiring corresponding use information of each of different functional areas of the hospital as the functional scene information;
a step (1013) of performing label matching processing on the position distribution state information and the functional scene information with respect to the building structure of the hospital, and generating a three-dimensional distribution map having a three-dimensional distribution label state with respect to different functional areas of the hospital as the distribution map; alternatively,
in the step (102), according to the distribution map of different functional areas of the hospital, determining an image acquisition mode corresponding to each of the different functional areas of the hospital, so as to acquire the plurality of images,
a step (1021) of obtaining at least one of area spatial size and/or area illumination information for each of the different functional areas from the distribution map, thereby determining an image capturing action condition for each functional area;
step (1022), according to the image capturing action condition, determining at least one of a shooting focal length, an exposure time and a shooting angle of view corresponding to each functional area image capturing action, so as to determine the image capturing mode;
step (1023), according to the respective image shooting mode of each functional area, carrying out acquisition operation of a plurality of images on the direction board and the area scene of each functional area;
alternatively,
in the step (103), each of the plurality of images is subjected to adaptive image analysis processing, so as to obtain text information and/or symbol information contained in each image, thereby obtaining the key feature information specifically including,
and (1031) performing image recognition processing on each of the plurality of images through a preset image recognition neural network model to obtain text content and/or symbol content included in each image, wherein the image recognition processing includes:
s1, converting the image to be identified into a gray image, and obtaining a binary image corresponding to the gray image according to the maximum brightness value and the minimum brightness value corresponding to the gray image;
s2, obtaining the discrete black-point distribution state corresponding to the binary image, and performing erosion and dilation processing on the binary image according to that distribution state;
s3, performing region selection processing in the X direction and the Y direction on the image after the erosion and dilation processing, the region selection being carried out by expanding and scanning from the middle outward to both sides;
s4, cutting out the characters in the corresponding image region according to the results of the X-direction and Y-direction region selection processing, and placing the characters obtained after cutting into a one-dimensional array;
s5, creating a corresponding character set from the characters in the one-dimensional array, and performing character matching on the character set with a preset template matching algorithm, the preset template matching algorithm comprising subtracting the characters in the character set from preset template characters; if the subtraction result meets a preset error condition, the character in the character set matches the preset template character. For example, for a character A in the character set (Figure FDA0003444817720000061), one preset template character T1 (Figure FDA0003444817720000062) and another preset template character T2 (Figure FDA0003444817720000063), the following subtraction processing is performed: Σ|A-T1| = 8 and Σ|A-T2| = 2. The subtraction error between character A and preset template character T2 is smaller than that between character A and preset template character T1, so character A is considered more similar to preset template character T2, and preset template character T2 is thereby determined to be character A;
s6, writing the characters determined in step S5 to match the preset template characters into a text as the character content and/or symbol content, for the processing of the following step (1032);
and (1032) performing understanding and recognition processing of natural language on the text content and/or symbol content included in each image, so as to obtain the key feature information.
5. The intelligent medical guidance method of claim 4, wherein:
in the step (2), generating a matching relation library between different functional areas of the hospital and corresponding area information thereof according to the key feature information corresponding to each of the plurality of images specifically includes,
step (201), translating the key characteristic information corresponding to each image into at least one language text description content by a preset text translation neural network model, and performing adaptive correction processing on the text description content;
step (202), constructing a mapping relation between the text description content corresponding to each image and the functional area corresponding to each image;
and (203) generating a matching relation library between different functional areas of the hospital and corresponding area information thereof through a preset data processing neural network model according to the mapping relation so as to represent a one-to-one corresponding data structure relation between the different functional areas of the hospital and the corresponding area information thereof.
6. The intelligent medical guidance method of claim 4, wherein:
in the step (3), the operation of performing voice instruction broadcasting on different functional areas of the hospital according to the matching relationship library specifically comprises,
step (301), extracting the position of each functional area and the character description content of each functional area guide from the matching relation library;
step (302), translating the text description content into a voice signal of at least one language through a preset voice translation neural network model;
and (303) selecting a proper voice signal according to a function area guide request input from the outside at present, and playing the voice signal through the voice indication broadcasting operation.
CN201910803483.1A 2019-08-28 2019-08-28 Intelligent medical guide robot and intelligent medical guide method Active CN110718294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910803483.1A CN110718294B (en) 2019-08-28 2019-08-28 Intelligent medical guide robot and intelligent medical guide method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910803483.1A CN110718294B (en) 2019-08-28 2019-08-28 Intelligent medical guide robot and intelligent medical guide method

Publications (2)

Publication Number Publication Date
CN110718294A CN110718294A (en) 2020-01-21
CN110718294B true CN110718294B (en) 2022-04-01

Family

ID=69209554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910803483.1A Active CN110718294B (en) 2019-08-28 2019-08-28 Intelligent medical guide robot and intelligent medical guide method

Country Status (1)

Country Link
CN (1) CN110718294B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110060039A (en) * 2009-11-30 2011-06-08 동국대학교 산학협력단 Communication robot and controlling method therof
CN102313547A (en) * 2011-05-26 2012-01-11 东南大学 Vision navigation method of mobile robot based on hand-drawn outline semantic map
CN106767750A (en) * 2016-11-18 2017-05-31 北京光年无限科技有限公司 A kind of air navigation aid and system for intelligent robot
CN107538485A (en) * 2016-06-29 2018-01-05 沈阳新松机器人自动化股份有限公司 A kind of robot guidance method and system
CN107967473A (en) * 2016-10-20 2018-04-27 南京万云信息技术有限公司 Based on picture and text identification and semantic robot autonomous localization and navigation


Also Published As

Publication number Publication date
CN110718294A (en) 2020-01-21

Similar Documents

Publication Publication Date Title
WO2022156640A1 (en) Gaze correction method and apparatus for image, electronic device, computer-readable storage medium, and computer program product
CN109146892A (en) A kind of image cropping method and device based on aesthetics
CN111325817A (en) Virtual character scene video generation method, terminal device and medium
CN112733797B (en) Method, device and equipment for correcting sight of face image and storage medium
CN110781964A (en) Human body target detection method and system based on video image
CN112052837A (en) Target detection method and device based on artificial intelligence
CN109493400A (en) Handwriting samples generation method, device, computer equipment and storage medium
CN113095434A (en) Target detection method and device, electronic equipment and storage medium
CN115311618A (en) Assembly quality inspection method based on deep learning and object matching
CN111104813A (en) Two-dimensional code image key point detection method and device, electronic equipment and storage medium
CN113160231A (en) Sample generation method, sample generation device and electronic equipment
CN113705510A (en) Target identification tracking method, device, equipment and storage medium
CN109785439B (en) Face sketch image generation method and related products
CN110718294B (en) Intelligent medical guide robot and intelligent medical guide method
CN113807185A (en) Data processing method and device
CN111212260B (en) Method and device for drawing lane line based on surveillance video
CN112257729A (en) Image recognition method, device, equipment and storage medium
CN115546221B (en) Reinforcing steel bar counting method, device, equipment and storage medium
TW202024994A (en) Image positioning system based on upsampling and method thereof
CN115760616A (en) Human body point cloud repairing method and device, electronic equipment and storage medium
CN113592881A (en) Image reference segmentation method and device, computer equipment and storage medium
CN113505844A (en) Label generation method, device, equipment, storage medium and program product
CN116366961A (en) Video conference method and device and computer equipment
Mishra et al. Environment descriptor for the visually impaired
Chen et al. An efficient and real-time emergency exit detection technology for the visually impaired people based on YOLOv5

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant