CN113449549A - Prompt message generation method, device, equipment and storage medium - Google Patents

Prompt message generation method, device, equipment and storage medium Download PDF

Info

Publication number
CN113449549A
Authority
CN
China
Prior art keywords
target
information
detection model
target object
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010216402.0A
Other languages
Chinese (zh)
Inventor
陈庆勇
马玉涛
桑建
莫小波
杜犁新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Chengdu ICT Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Chengdu ICT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Chengdu ICT Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202010216402.0A priority Critical patent/CN113449549A/en
Publication of CN113449549A publication Critical patent/CN113449549A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/02 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
    • G01S 15/06 Systems determining the position data of a target
    • G01S 15/08 Systems for measuring distance only
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Abstract

The invention discloses a prompt information generation method, apparatus, device, and storage medium. The method comprises the following steps: acquiring a target area image of the area where a target person is located; detecting a target object in the target area image by using a preset object detection model, and determining position information and type information of the target object; ranging the target object, and determining distance information between the target object and the target person; and generating prompt information according to the position information, the type information, and the distance information, where the prompt information is used for prompting the target person to avoid the target object. In this way, a visually impaired person can travel without carrying a bulky travel aid, travel cost is reduced, the accuracy and comprehensiveness of obstacle avoidance are improved, and safe travel for visually impaired people is better ensured.

Description

Prompt message generation method, device, equipment and storage medium
Technical Field
The invention belongs to the technical field of visual identification, and particularly relates to a prompt message generation method, a prompt message generation device, prompt message generation equipment and a storage medium.
Background
A visually impaired person is a person whose visual function is impaired.
At present, to avoid danger while a visually impaired person is traveling, a dedicated companion is required to accompany the person. To save the human resources involved, an assisted-travel system designed for vulnerable groups can be used to help visually impaired people travel.
However, such assisted-travel systems are not only costly but also bulky and inconvenient to carry when going out.
Disclosure of Invention
The embodiments of the invention provide a prompt information generation method, apparatus, device, and storage medium, which solve the problems that assisted-travel systems are costly, bulky, and inconvenient to carry when traveling.
In a first aspect, a method for generating a prompt message is provided, where the method includes:
acquiring a target area image of an area where a target person is located;
detecting a target object in the target area image by using a preset object detection model, and determining the position information and the type information of the target object;
ranging a target object, and determining distance information between the target object and a target person;
generating prompt information according to the position information, the type information and the distance information; the prompt information is used for prompting the target person to avoid the target object.
In one possible implementation manner, detecting a target object in a target area image by using a pre-established object detection model, and determining position information and type information of the target object includes:
respectively inputting the target area images into an object detection model, and determining a plurality of first target frames of the target object in the target area images; the first target frames are different in size and aspect ratio;
determining scores and position offsets of a plurality of first target boxes;
determining a plurality of second target frames according to the scores and the position offsets of the plurality of first target frames; the plurality of first target frames comprises a plurality of second target frames;
position information and type information of the target object is determined from the plurality of second target frames using a non-maximum suppression algorithm.
In one possible implementation, the method further includes:
storing a target area image, and storing the position information and the type information of a target object in the target area image into an image database;
when the number of the target area images in the image database is larger than a preset number threshold, training an object detection model based on the stored position information and type information of the target object of the target area images to obtain a new object detection model;
detecting a target object in the target area image by using the new object detection model, and determining the position information and the type information of the target object;
ranging a target object, and determining distance information between the target object and a target person;
generating prompt information according to the position information, the type information and the distance information; the prompt information is used for prompting the target person to avoid the target object.
In one possible implementation, the method further includes:
and deleting the target area image and the position information and the type information of the target object in the target area image which are stored in the image database.
In one possible implementation, the method further includes:
acquiring a sample training set; the sample training set comprises a plurality of training samples, each training sample comprising a sample region image and label information for each sample region image;
for each training sample, respectively executing the following steps one to three:
step one: inputting the sample region image into a Single Shot MultiBox Detector (SSD) algorithm detection model to obtain type information and position information of a sample object in the sample region image;
step two: determining a loss function value of the SSD algorithm detection model according to the type information and the position information of the sample object in the sample region image and the label information of the sample region image;
step three: and training an SSD algorithm detection model according to the loss function value to obtain an object detection model.
In a possible implementation manner, training an SSD algorithm detection model according to a loss function value to obtain an object detection model includes:
and when the loss function value of the SSD algorithm detection model does not meet the preset training stop condition, adjusting the parameters of the SSD algorithm detection model, and training the adjusted SSD algorithm detection model by using the training sample set until the preset training stop condition is met, so as to obtain the object detection model.
In one possible implementation, the loss function of the SSD algorithm detection model includes a logistic regression loss function and a least squares loss function.
In a second aspect, a prompt message generating apparatus is provided, the apparatus including:
the acquisition module is used for acquiring a target area image of an area where a target person is located;
the first determination module is used for detecting a target object in the target area image by using a preset object detection model and determining the position information and the type information of the target object;
the second determining module is used for measuring the distance of the target object and determining the distance information between the target object and the target person;
the generating module is used for generating prompt information according to the position information, the type information and the distance information; the prompt information is used for prompting the target person to avoid the target object.
In a third aspect, an electronic device is provided, the device comprising: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements a method as in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, there is provided a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method as in the first aspect or any possible implementation of the first aspect.
With the provided prompt information generation method, apparatus, device, and storage medium, a target area image of the area where a target person is located is acquired; a target object in the target area image is detected using a preset object detection model, and position information and type information of the target object are determined; the target object is ranged, and distance information between the target object and the target person is determined; and prompt information is generated according to the position information, the type information, and the distance information, the prompt information being used for prompting the target person to avoid the target object. In this way, a visually impaired person can travel without carrying a bulky travel aid, travel cost is reduced, the accuracy and comprehensiveness of obstacle avoidance are improved, and safe travel for visually impaired people is better ensured.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the embodiments of the present invention are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a prompt message generation method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a prompt information generating apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below. In order to make the objects, technical solutions, and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely intended to illustrate the invention and are not intended to limit it. It will be apparent to those skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
At present, to keep visually impaired people safe while traveling, either a dedicated companion accompanies them, or road conditions are conveyed through barrier-free public facilities such as tactile paving, brightly colored indicator lights, handrails, and other prominent signs. To save the human resources involved, an assisted-travel system designed for vulnerable groups can be used to help visually impaired people travel. However, manual assistance depends on a dedicated companion, does not fundamentally solve the travel problem of visually impaired people, and wastes human resources. In addition, because public facilities are often occupied, damaged, or poorly designed, large-scale facility construction in domestic cities has so far provided little help to visually impaired people, and building external facilities does not fundamentally solve the travel problem either. Dedicated assisted-travel equipment is scarce, and the assisted-travel systems that do exist for the low-vision group, such as bus navigators, mobile phones, flashlights, short-range ultrasonic obstacle avoidance instruments, and navigation devices designed for low-vision users, either have limited functions or are expensive, costly, or bulky and therefore inconvenient for travel.
Therefore, the embodiments of the invention provide a prompt information generation method, apparatus, device, and storage medium, so that visually impaired people can travel without carrying any tool, travel cost is reduced, the accuracy and comprehensiveness of obstacle avoidance are improved, and the safe travel of visually impaired people is better ensured.
For convenience of understanding of the embodiment of the present invention, a detailed description is first given of the prompt information generation method provided in the embodiment of the present invention.
Fig. 1 is a schematic flow chart of a method for generating a prompt message according to an embodiment of the present invention.
As shown in fig. 1, a method for generating a prompt message according to an embodiment of the present invention includes:
s101: and acquiring a target area image of the area where the target person is located.
The target person refers to a person with impaired vision who is going out. The area in which the target person is located may be an area within a certain range of the position in which the target person is currently located. Here, in order to prompt the target person for obstacle avoidance, it is necessary to acquire a target area image of an area where the target person is located. The target area image may be acquired by a camera disposed at a traffic light.
In some embodiments, the target area image may include one target person or a plurality of target persons. The target area may be the area corresponding to the image that the image acquisition device is able to capture.
S102: and detecting the target object in the target area image by using a preset object detection model, and determining the position information and the type information of the target object.
The object detection model is trained in advance. In some embodiments, the object detection model may be trained by:
specifically, a sample training set is obtained; the sample training set includes a plurality of training samples, each training sample including a sample region image and label information for each sample region image.
The label information includes the position information and type information of the sample objects around the visually impaired person in the sample region image. The type information of an object may be, for example, a person, a vehicle, or a traffic sign. For example, if there are a vehicle and a person near person A in the sample region image, the label information of the sample image is the type information of the objects around person A, i.e. a vehicle and a person, together with the position information of the vehicle and the position information of the person.
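As an illustration only, a training sample and its label information could be represented as in the following minimal sketch; the use of Python dataclasses and all class and field names are assumptions for illustration and are not part of the patent.

```python
# A minimal sketch of how a training sample and its label information could be represented,
# assuming Python dataclasses; all field and class names here are illustrative.
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class ObjectLabel:
    type_info: str                             # e.g. "person", "vehicle", "traffic sign"
    position_info: Tuple[int, int, int, int]   # bounding box (x1, y1, x2, y2) in the image

@dataclass
class TrainingSample:
    sample_region_image: Any                   # the sample region image (e.g. a numpy array)
    labels: List[ObjectLabel]                  # label info for all objects around the person

# Example: person A has a vehicle and a person nearby in the sample region image.
sample = TrainingSample(
    sample_region_image=None,                  # placeholder for an actual image
    labels=[ObjectLabel("vehicle", (40, 60, 200, 180)),
            ObjectLabel("person", (220, 50, 300, 240))],
)
```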
In the process of training the object detection model, a sample training set may be collected in advance to perform model training. Alternatively, in the process of generating the prompt information, the acquired target area images and the detected position information and type information of the objects in them can be stored in a database, and the stored target area images together with their label information can then be used as a training sample set for model training.
Wherein, when the object detection model is trained for the first time, the sample training set is collected in advance.
The sample training set includes a plurality of training samples. For each training sample, respectively executing the following steps one to three:
Step one: inputting the sample region image into an SSD algorithm detection model to obtain the type information and the position information of the sample object in the sample region image.
The first five convolutional blocks of the VGG16 base network are used as the network structure of the SSD algorithm model. Layers 6 and 7 of the VGG16 base network are used as convolutional layers of the SSD algorithm model. The SSD algorithm model also includes 3 additional convolutional layers and one pooling layer. The loss function of the SSD algorithm detection model includes a logistic regression loss function and a least squares loss function.
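To make this architecture description concrete, the following is a minimal PyTorch-style sketch of such a backbone. The use of torchvision's VGG16, the channel and kernel sizes of the additional layers, and the class name are illustrative assumptions rather than the patent's exact model.

```python
# A minimal PyTorch sketch of an SSD-style backbone as described above. Channel sizes,
# kernel sizes, and the use of torchvision's VGG16 are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision

class SSDStyleBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        # VGG16 base network used as the feature extractor.
        self.base = torchvision.models.vgg16(weights=None).features
        # Layers 6 and 7 of the base network expressed as convolutional layers.
        self.conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)
        self.conv7 = nn.Conv2d(1024, 1024, kernel_size=1)
        # Three additional convolutional layers and one pooling layer for extra feature maps.
        self.extra = nn.Sequential(
            nn.Conv2d(1024, 256, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )

    def forward(self, x):
        x = self.base(x)
        x = torch.relu(self.conv6(x))
        x = torch.relu(self.conv7(x))
        return self.extra(x)
```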
By inputting the sample region image into the SSD algorithm model, the type information and position information of the sample object in the sample region image can be extracted.
Step two: and determining a loss function value of the SSD algorithm detection model according to the type information and the position information of the sample object in the sample region image and the label information of the sample region image.
The label information includes the type information and position information of the sample object. The loss function value of the SSD algorithm detection model can be determined by comparing the type information and position information of the sample object predicted by the SSD algorithm model with the type information and position information of the sample object in the label information.
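As a rough illustration of this step, the following minimal sketch (assuming PyTorch) combines a classification term standing in for the logistic regression loss with a least-squares localization term; the weighting factor loc_weight is an assumption for illustration, not a value from the patent.

```python
# A minimal sketch of the loss function value described in step two, assuming PyTorch.
import torch
import torch.nn.functional as F

def detection_loss(pred_logits, pred_boxes, label_types, label_boxes, loc_weight=1.0):
    # Logistic-regression-style classification loss between predicted and labelled types.
    cls_loss = F.cross_entropy(pred_logits, label_types)
    # Least-squares loss between predicted and labelled box positions.
    loc_loss = F.mse_loss(pred_boxes, label_boxes)
    return cls_loss + loc_weight * loc_loss
```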
Step three: and training an SSD algorithm detection model according to the loss function value to obtain an object detection model.
And after a loss function value corresponding to one training sample is determined, training an SSD algorithm detection model based on the loss function value to obtain an object detection model.
Specifically, when the loss function value of the SSD algorithm detection model does not meet the preset training stop condition, the parameters of the SSD algorithm detection model are adjusted, the adjusted SSD algorithm detection model is trained by using the training sample set until the preset training stop condition is met, and the object detection model is obtained.
The training stop condition of the model may be set in advance. For example, the model training ends when the loss function value is smaller than a certain value. If the loss function value does not meet the preset training stop condition, the parameters of the SSD algorithm detection model are adjusted, and the adjusted model is further trained on other training samples selected from the training sample set. These steps are repeated until the calculated loss function value meets the preset training stop condition, and the object detection model is obtained.
Wherein the loss function of the object detection model comprises a logistic regression loss function and a least squares loss function.
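Putting steps one to three together, a minimal training-loop sketch might look as follows, assuming PyTorch, a model that returns class logits and box predictions, and the hypothetical detection_loss sketched above; the optimizer settings and the loss threshold used as the training-stop condition are illustrative assumptions.

```python
# A minimal training-loop sketch for steps one to three, assuming PyTorch and the
# hypothetical detection_loss above. Optimizer settings and the stop threshold are
# illustrative assumptions, not the patent's values.
import torch

def train_object_detection_model(model, train_loader, max_epochs=50, loss_threshold=0.05):
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    for _ in range(max_epochs):
        for images, label_types, label_boxes in train_loader:
            pred_logits, pred_boxes = model(images)            # step one: forward pass
            loss = detection_loss(pred_logits, pred_boxes,     # step two: loss function value
                                  label_types, label_boxes)
            if loss.item() < loss_threshold:                   # preset training-stop condition
                return model                                   # object detection model obtained
            optimizer.zero_grad()                              # step three: adjust parameters
            loss.backward()
            optimizer.step()
    return model
```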
And inputting the target area image into a trained object detection model, and determining the position information and the type information of the target object in the target area image.
The position information may be position information of the target object relative to the target person. For example, if the position information of the target object is a position right in front, it indicates that the target object is right in front of the target person.
Specifically, detecting a target object in a target area image by using a trained object detection model, and determining position information and type information of the target object, specifically including:
respectively inputting the target area images into an object detection model, and determining a plurality of first target frames of the target object in the target area images; the first target frames are different in size and aspect ratio from one another.
Determining scores and position offsets of a plurality of first target boxes;
determining a plurality of second target frames according to the scores and the position offsets of the plurality of first target frames; the plurality of first target frames comprises a plurality of second target frames;
position information and type information of the target object is determined from the plurality of second target frames using a non-maximum suppression algorithm.
When the tracking of the target person starts, the image of the region where the tracked target person is located is input into the object detection model, and the type information and position information of the target objects around the target person are detected. When the object detection model detects a target object, a plurality of first target frames with different scales and different aspect ratios are generated; these first target frames are frames in the target area image that may contain the target object. A plurality of different convolution filters are then applied to each convolution layer to obtain the score and position offset of each first target frame. A series of second target frames can be determined based on the scores and the position offsets. For example, a first target frame whose score and position offset both satisfy a preset condition may be regarded as a second target frame. The plurality of second target frames are part of the plurality of first target frames. There may be one target object or a plurality of target objects in the target area image, and one target object may have one second target frame or a plurality of second target frames. In order to determine the final position information and type information of the target object, the final detection result can be determined by a non-maximum suppression algorithm, so as to obtain the position information and type information of the target object.
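A minimal pure-Python sketch of the non-maximum suppression step is shown below; boxes are assumed to be [x1, y1, x2, y2] coordinates and the IoU threshold of 0.5 is an illustrative assumption.

```python
# A minimal non-maximum suppression sketch for selecting final target frames from the
# second target frames, given per-box scores.

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    # Keep the highest-scoring box, drop boxes that overlap it too much, repeat.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```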
In some embodiments, in order to further improve the accuracy of the object detection performed by the object detection model, the object detection model may be continuously updated.
Specifically, a target area image is stored, and the position information and the type information of a target object in the target area image are stored in an image database;
when the number of the target area images in the image database is larger than a preset number threshold, training an object detection model based on the stored position information and type information of the target object of the target area images to obtain a new object detection model;
detecting a target object in the target area image by using the new object detection model, and determining the position information and the type information of the target object;
ranging a target object, and determining distance information between the target object and a target person;
generating prompt information according to the position information, the type information and the distance information; the prompt information is used for prompting the target person to avoid the target object.
After the target area image is acquired and the position information and type information of the target object in the target area image are determined, the target area image is stored in the image database as a new training sample. When the number of training samples reaches a certain number, the new training samples can be retrieved from the database to retrain the object detection model, and object detection is then performed using the updated object detection model.
After the object detection model is updated, the target area images stored in the image database, together with the position information and type information of the target objects in them, are deleted.
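The following minimal sketch illustrates this update flow, modelling the image database as a simple in-memory list; the class, its method names, and the count threshold are illustrative assumptions.

```python
# A minimal sketch of the incremental-update flow: detections are stored, the model is
# retrained once enough new samples accumulate, and the stored samples are then deleted.

class IncrementalUpdater:
    def __init__(self, retrain_fn, count_threshold=1000):
        self.image_db = []                      # stored (image, position_info, type_info)
        self.retrain_fn = retrain_fn            # callable that retrains on the stored samples
        self.count_threshold = count_threshold  # preset number threshold

    def record(self, image, position_info, type_info):
        self.image_db.append((image, position_info, type_info))

    def maybe_update(self, model):
        if len(self.image_db) > self.count_threshold:
            model = self.retrain_fn(model, self.image_db)  # train a new object detection model
            self.image_db.clear()                          # delete stored samples after update
        return model
```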
S103: and (4) ranging the target object, and determining the distance information between the target object and the target person.
The ultrasonic ranging module is small in size and easy to carry, so ultrasonic ranging can be used to range the target object. Ultrasonic ranging determines the distance of the target object from the time taken for an emitted ultrasonic wave to reach the target object and be reflected back.
The formula for ultrasonic ranging is expressed as L = C × T,
where L is the measured distance, C is the propagation speed of the ultrasonic wave in air, and T is the propagation time over the measured distance, i.e. half the time elapsed between transmission and reception. The speed of sound C is known to be 344 m/s at room temperature (20 °C).
To further improve the accuracy of ultrasonic ranging, the temperature of the environment in which the ultrasonic wave propagates needs to be considered. During propagation, the propagation speed is affected by the density of the air: the higher the air density, the faster the ultrasonic wave propagates, and the air density is closely related to temperature. The approximate formula is C = C0 + 0.607 × T, where C0 is the speed of sound at 0 °C (332 m/s) and T is the actual temperature (°C).
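The two formulas above can be combined into a small helper, as in the following sketch; the function names and argument conventions are illustrative assumptions, while the constants (332 m/s at 0 °C, 0.607 m/s per °C, half the transmit-to-receive interval) follow the description above.

```python
# A minimal sketch of temperature-compensated ultrasonic ranging as described above.

def sound_speed(temperature_c: float) -> float:
    """Approximate speed of sound in air (m/s): C = C0 + 0.607 * T, with C0 = 332 m/s at 0 degC."""
    return 332.0 + 0.607 * temperature_c

def ultrasonic_distance(round_trip_time_s: float, temperature_c: float = 20.0) -> float:
    """Distance to the target (m): L = C * T, with T half the transmit-to-receive interval."""
    one_way_time = round_trip_time_s / 2.0
    return sound_speed(temperature_c) * one_way_time

# Example: an echo received 3 ms after transmission at 20 degC corresponds to about 0.52 m.
print(ultrasonic_distance(0.003))  # ~0.516
```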
S104: generating prompt information according to the position information, the type information and the distance information; the prompting information is used for prompting the target person to avoid the target object.
After the distance information between the target person and the target object is measured, the prompt information is generated by combining it with the position information and type information. For example, when a target object is within 25 cm to 50 cm, the type of the target object and the direction in which to avoid it are announced in time. When a target object is within 25 cm, the user is immediately warned to stop moving in the original direction, and the type of the target object and a correct, safe direction of travel are announced.
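A minimal sketch of such a prompt-generation rule is given below; the 25 cm and 50 cm distance bands follow the example above, while the function name and the message wording are illustrative assumptions.

```python
# A minimal sketch of the prompt-generation rule described above.
from typing import Optional

def generate_prompt(type_info: str, position_info: str, distance_m: float) -> Optional[str]:
    if distance_m < 0.25:
        # Very close: warn the person to stop moving in the original direction.
        return (f"Stop! A {type_info} is {position_info}, less than 25 cm away. "
                f"Please change to a safe direction.")
    if distance_m < 0.50:
        # Within 25-50 cm: announce the object type and the direction in which to avoid it.
        return (f"A {type_info} is {position_info}, about {distance_m * 100:.0f} cm away. "
                f"Please avoid it.")
    return None  # far enough away; no prompt needed

# Example: a vehicle detected 40 cm directly ahead of the target person.
print(generate_prompt("vehicle", "directly ahead", 0.40))
```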
According to the prompt information generation method provided by the embodiment of the invention, a target area image of the area where a target person is located is acquired; a target object in the target area image is detected using a preset object detection model, and position information and type information of the target object are determined; the target object is ranged, and distance information between the target object and the target person is determined; and prompt information is generated according to the position information, the type information, and the distance information, the prompt information being used for prompting the target person to avoid the target object. In this way, a visually impaired person can travel without carrying a bulky travel aid, travel cost is reduced, the accuracy and comprehensiveness of obstacle avoidance are improved, and safe travel for visually impaired people is better ensured.
Fig. 2 is a schematic structural diagram of a prompt information generating apparatus according to an embodiment of the present invention.
As shown in fig. 2, a prompt information generating device provided in an embodiment of the present invention may include: the device comprises an acquisition module 201, a first determination module 202, a second determination module 203 and a generation module 204.
An obtaining module 201, configured to obtain a target area image of an area where a target person is located;
a first determining module 202, configured to detect a target object in the target area image using a preset object detection model, and determine location information and type information of the target object;
the second determining module 203 is configured to measure a distance between the target object and the target person and determine distance information between the target object and the target person;
the generating module 204 is configured to generate a prompt message according to the location information, the type information, and the distance information; the prompt information is used for prompting the target person to avoid the target object.
Optionally, in some embodiments of the present invention, the first determining module 202 is specifically configured to:
respectively inputting the target area images into an object detection model, and determining a plurality of first target frames of the target object in the target area images; the first target frames are different in size and aspect ratio;
determining scores and position offsets of a plurality of first target boxes;
determining a plurality of second target frames according to the scores and the position offsets of the plurality of first target frames; the plurality of first target frames comprises a plurality of second target frames;
position information and type information of the target object is determined from the plurality of second target frames using a non-maximum suppression algorithm.
Optionally, in some embodiments of the present invention, the apparatus further includes:
the storage module is used for storing the target area image, and the position information and the type information of the target object in the target area image are stored in the image database;
the training module is used for training the object detection model based on the stored position information and type information of the target object of the target area image to obtain a new object detection model when the number of the target area images in the image database is greater than a preset number threshold;
a first determining module 202, configured to detect a target object in the target area image using the new object detection model, and determine location information and type information of the target object;
the second determining module 203 is configured to measure a distance between the target object and the target person and determine distance information between the target object and the target person;
the generating module 204 is configured to generate a prompt message according to the location information, the type information, and the distance information; the prompt information is used for prompting the target person to avoid the target object.
Optionally, in some embodiments of the present invention, the apparatus further includes:
and the deleting module is used for deleting the target area image stored in the image database and the position information and the type information of the target object in the target area image.
Optionally, in some embodiments of the present invention, the apparatus further includes:
an obtaining module 201, configured to obtain a sample training set; the sample training set comprises a plurality of training samples, each training sample comprising a sample region image and label information for each sample region image;
the training module is used for performing the following steps one to three for each training sample:
step one: inputting the sample region image into a Single Shot MultiBox Detector (SSD) algorithm detection model to obtain type information and position information of a sample object in the sample region image;
step two: determining a loss function value of the SSD algorithm detection model according to the type information and the position information of the sample object in the sample region image and the label information of the sample region image;
step three: and training an SSD algorithm detection model according to the loss function value to obtain an object detection model.
Optionally, in some embodiments of the present invention, the training module is specifically configured to:
and when the loss function value of the SSD algorithm detection model does not meet the preset training stop condition, adjusting the parameters of the SSD algorithm detection model, and training the adjusted SSD algorithm detection model by using the training sample set until the preset training stop condition is met, so as to obtain the object detection model.
Optionally, in some embodiments of the present invention, the loss function of the SSD algorithm detection model includes a logistic regression loss function and a least squares loss function.
According to the prompt information generation apparatus provided by the embodiment of the invention, the acquisition module acquires a target area image of the area where a target person is located; the first determination module detects a target object in the target area image using a preset object detection model and determines position information and type information of the target object; the second determination module ranges the target object and determines distance information between the target object and the target person; and the generation module generates prompt information according to the position information, the type information, and the distance information, the prompt information being used for prompting the target person to avoid the target object. In this way, a visually impaired person can travel without carrying a bulky travel aid, travel cost is reduced, the accuracy and comprehensiveness of obstacle avoidance are improved, and safe travel for visually impaired people is better ensured.
The prompt information generation device provided by the embodiment of the invention executes each step in the method shown in fig. 1, and can achieve the technical effects that a person with visual impairment does not need to carry a large-volume travel tool when going out, the travel cost is reduced, the accuracy and comprehensiveness of obstacle avoidance are improved, and the safe travel of the person with visual impairment is better ensured, and for the sake of brief description, detailed description is omitted here.
Fig. 3 is a schematic diagram illustrating a hardware structure of an electronic device according to an embodiment of the present invention.
The electronic device may comprise a processor 301 and a memory 302 in which computer program instructions are stored.
In particular, the processor 301 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present invention.
Memory 302 may include mass storage for data or instructions. By way of example, and not limitation, memory 302 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 302 may include removable or non-removable (or fixed) media, where appropriate. The memory 302 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 302 is a non-volatile solid-state memory. In a particular embodiment, the memory 302 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 301 reads and executes the computer program instructions stored in the memory 302 to implement any one of the hint information generation methods in the embodiment shown in fig. 1.
In one example, the electronic device may also include a communication interface 303 and a bus 310. As shown in fig. 3, the processor 301, the memory 302, and the communication interface 303 are connected via a bus 310 to complete communication therebetween.
The communication interface 303 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiment of the present invention.
Bus 310 includes hardware, software, or both, coupling the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low pin count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 310 may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated by the invention.
The electronic device may execute the prompt information generation method in the embodiment of the present invention, so as to implement the prompt information generation method described in conjunction with fig. 1.
In addition, in combination with the prompt information generating method in the foregoing embodiment, the embodiment of the present invention may provide a computer storage medium to implement. The computer storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the hint information generation methods in the above embodiments.
It is to be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via a computer network, such as the internet, an intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working processes of the system, the module and the unit described above may refer to corresponding processes in the foregoing method embodiments, and no further description is provided herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention.

Claims (10)

1. A method for generating prompt information is characterized in that the method comprises the following steps:
acquiring a target area image of an area where a target person is located;
detecting a target object in a target area image by using a preset object detection model, and determining position information and type information of the target object;
ranging the target object, and determining the distance information between the target object and the target person;
generating prompt information according to the position information, the type information and the distance information; the prompt information is used for prompting the target person to avoid the target object.
2. The method according to claim 1, wherein the detecting a target object in a target area image using a pre-established object detection model, determining position information and type information of the target object, comprises:
inputting the target area images into the object detection model respectively, and determining a plurality of first target frames of the target object in the target area images; the plurality of first target frames are different in size and aspect ratio;
determining scores and position offsets of the plurality of first target boxes;
determining a plurality of second target frames according to the scores and the position offsets of the plurality of first target frames; the plurality of first target boxes includes the plurality of second target boxes;
determining position information and type information of the target object from the plurality of second target frames using a non-maximum suppression algorithm.
3. The method of claim 1, further comprising:
saving the target area image, wherein the position information and the type information of the target object in the target area image are stored in an image database;
when the number of the target area images in the image database is larger than a preset number threshold, training the object detection model based on the stored position information and type information of the target object of the target area images to obtain a new object detection model;
detecting a target object in a target area image by using the new object detection model, and determining the position information and the type information of the target object;
ranging the target object, and determining the distance information between the target object and the target person;
generating prompt information according to the position information, the type information and the distance information; the prompt information is used for prompting the target person to avoid the target object.
4. The method of claim 3, further comprising:
and deleting the target area image and the position information and the type information of the target object in the target area image, which are saved in the image database.
5. The method according to any one of claims 1-4, further comprising:
acquiring a sample training set; the sample training set comprises a plurality of training samples, each of which comprises a sample region image and label information of each sample region image;
for each training sample, respectively executing the following steps one to three:
step one: inputting the sample region image into a target identification Single Shot MultiBox Detector (SSD) algorithm detection model to obtain type information and position information of a sample object in the sample region image;
step two: determining a loss function value of the SSD algorithm detection model according to the type information and the position information of the sample object in the sample region image and the label information of the sample region image;
step three: and training the SSD algorithm detection model according to the loss function value to obtain the object detection model.
6. The method of claim 5, wherein training the SSD algorithm detection model based on the loss function values to obtain the object detection model comprises:
and when the loss function value of the SSD algorithm detection model does not meet a preset training stop condition, adjusting the parameters of the SSD algorithm detection model, and training the adjusted SSD algorithm detection model by using the training sample set until the preset training stop condition is met, so as to obtain the object detection model.
7. The method of claim 6, wherein the loss function of the SSD algorithm detection model comprises a logistic regression loss function and a least squares loss function.
8. An apparatus for generating hint information, the apparatus comprising:
the acquisition module is used for acquiring a target area image of an area where a target person is located;
the device comprises a first determination module, a second determination module and a third determination module, wherein the first determination module is used for detecting a target object in a target area image by using a preset object detection model and determining the position information and the type information of the target object;
the second determining module is used for measuring the distance of the target object and determining the distance information between the target object and the target person;
the generating module is used for generating prompt information according to the position information, the type information and the distance information; the prompt information is used for prompting the target person to avoid the target object.
9. An electronic device, characterized in that the device comprises: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements the hint information generation method as claimed in any of claims 1-7.
10. A computer storage medium having computer program instructions stored thereon, which when executed by a processor implement the hint information generation method as claimed in any one of claims 1 to 7.
CN202010216402.0A 2020-03-25 2020-03-25 Prompt message generation method, device, equipment and storage medium Pending CN113449549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010216402.0A CN113449549A (en) 2020-03-25 2020-03-25 Prompt message generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010216402.0A CN113449549A (en) 2020-03-25 2020-03-25 Prompt message generation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113449549A true CN113449549A (en) 2021-09-28

Family

ID=77806631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010216402.0A Pending CN113449549A (en) 2020-03-25 2020-03-25 Prompt message generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113449549A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301773A (en) * 2017-06-16 2017-10-27 上海肇观电子科技有限公司 A kind of method and device to destination object prompt message
CN110857857A (en) * 2018-08-24 2020-03-03 福特全球技术公司 Navigation assistance for visually impaired persons
CN109598742A (en) * 2018-11-27 2019-04-09 湖北经济学院 A kind of method for tracking target and system based on SSD algorithm
CN110208946A (en) * 2019-05-31 2019-09-06 京东方科技集团股份有限公司 A kind of wearable device and the exchange method based on wearable device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797783A (en) * 2023-02-01 2023-03-14 北京有竹居网络技术有限公司 Method and device for generating barrier-free information, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20210928