CN108877228B - An unmanned aerial vehicle for scenic spot guidance - Google Patents

An unmanned aerial vehicle for scenic spot guidance

Info

Publication number
CN108877228B
CN108877228B (application number CN201811013220.2A)
Authority
CN
China
Prior art keywords
road section
scenic spot
intersection
unmanned aerial
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811013220.2A
Other languages
Chinese (zh)
Other versions
CN108877228A (en)
Inventor
喻明明 (Yu Mingming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Bohao Land Technology Development Co.,Ltd.
Original Assignee
Liaoning Bohao Land Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Bohao Land Technology Development Co ltd filed Critical Liaoning Bohao Land Technology Development Co ltd
Priority to CN201811013220.2A priority Critical patent/CN108877228B/en
Publication of CN108877228A publication Critical patent/CN108877228A/en
Application granted granted Critical
Publication of CN108877228B publication Critical patent/CN108877228B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108: Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/012: Measuring and analyzing of parameters relative to traffic conditions based on the source of data from other sources than vehicle or roadside beacons, e.g. mobile networks
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C: AEROPLANES; HELICOPTERS
    • B64C39/00: Aircraft not otherwise provided for
    • B64C39/02: Aircraft not otherwise provided for characterised by special use
    • B64U: UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00: UAVs specially adapted for particular uses or applications
    • B64U2101/30: UAVs specially adapted for particular uses or applications for imaging, photography or videography

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an unmanned aerial vehicle for scenic spot guidance, comprising a memory, a processor, and a communication device. The memory stores executable program code and data, the communication device enables the unmanned aerial vehicle to communicate with other equipment, and the processor calls the executable program code stored in the memory to execute the following steps: controlling the unmanned aerial vehicle to cruise regularly and acquire intersection information of the scenic spot; taking each road section behind the intersection as an acquisition road section; collecting pedestrian information on the acquisition road sections; analyzing and aggregating the pedestrian information to obtain an aggregation result; and controlling the communication device to send the aggregation result as notification information. According to the invention, the unmanned aerial vehicle photographs and analyzes the road sections behind an intersection in the scenic spot to obtain an analysis result, and the result is announced so that pedestrians at the intersection know the condition of each road section behind it, thereby guiding pedestrians in choosing a road section and diverting foot traffic within the scenic spot.

Description

An unmanned aerial vehicle for scenic spot guidance
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle for scenic spot guidance.
Background
With the development of unmanned aerial vehicle technology, unmanned aerial vehicles are increasingly and widely applied in people's production and daily life, for example in mapping and cruise monitoring. As the technology has flourished, many unmanned aerial vehicle application cases have emerged on the market, focused mainly on terrain mapping, terrain reconnaissance, three-dimensional modeling, and logistics. These applications bring many conveniences to the production and life of the public.
In tourist attractions, guides are often provided to introduce the sights to tourists and enhance their experience. However, many scenic spot operators are deterred by the long training period and high employment cost of tour guides. Some organizations have proposed robot-based schemes that explain and introduce scenic spots in place of tour guides, but the high cost and relatively monotonous explanations of robots leave such schemes at a disadvantage compared with traditional manual guiding. Unmanned aerial vehicles are low in cost, have a wide range of movement, and are highly eye-catching, which gives them a unique advantage in the field of scenic spot guidance. Compared with traditional manual guides and robot guides, unmanned aerial vehicle guidance has clear advantages and higher application potential and commercial value.
Nevertheless, in a scenic spot an unmanned aerial vehicle guide can currently only lead the way and give explanations; it cannot adjust the route for visitors on the fly, which leads to visitor congestion and degrades the touring experience.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle scenic spot diversion guidance method and an unmanned aerial vehicle that address the above defects in the prior art.
The purpose of the invention is realized by the following technical scheme:
provided is an unmanned aerial vehicle scenic spot diversion guiding method, which comprises the following steps:
the unmanned aerial vehicle regularly cruises and acquires intersection information of scenic spots;
taking each road section behind the intersection as an acquisition road section;
collecting pedestrian information on the collection road section;
analyzing and aggregating the pedestrian information to obtain an aggregation result;
and sending the aggregation result as notification information.
Preferably, the unmanned aerial vehicle cruising at regular time and acquiring scenic spot intersection information includes:
acquiring a first scenic spot image through high-altitude shooting;
and carrying out image recognition on the first scenic spot image to determine the position of the intersection.
At an intersection, pedestrians in the scenic spot must choose a route. Therefore, once the unmanned aerial vehicle has determined the intersection position, it can provide a reference for the pedestrians' choice in advance and thereby act as a diversion guide.
Preferably, the step of taking each road section behind the intersection as an acquisition road section comprises:
determining the road section distribution of the acquisition road sections by identifying the first scenic spot image;
and adjusting the shooting height and angle according to the road section distribution so that every acquisition road section is captured in a second scenic spot image.
Because the road sections in a scenic spot are not laid out regularly, the flying height and shooting angle of the unmanned aerial vehicle need to be adjusted so that the captured second scenic spot image covers the condition of every road section.
Preferably, the step of taking each road section behind the intersection as an acquisition road section comprises:
road section distribution information is preset in the scenic spot intersection information;
after the intersection position is determined, extracting the road section distribution information from the intersection information and determining the road section distribution of the acquisition road sections;
and adjusting the shooting height and angle according to the road section distribution so that every acquisition road section is captured in the second scenic spot image.
Because the road sections in a scenic spot are not laid out regularly, the flying height and shooting angle of the unmanned aerial vehicle need to be adjusted so that the captured second scenic spot image covers the condition of every road section.
Preferably, the collecting pedestrian information on the collecting road section comprises:
and identifying the number of pedestrians on each acquisition road section in the second scenic spot image to obtain the number of pedestrians.
The number of pedestrians is the count of pedestrians walking on an acquisition road section; it describes how crowded that road section is and provides a selection reference for pedestrians at the intersection.
Preferably, the analyzing and aggregating the pedestrian information to obtain an aggregation result includes:
and carrying out density analysis on the number of the pedestrians to obtain the pedestrian density of each acquisition road section.
The pedestrian density describes how crowded each acquisition road section is, so that pedestrians can see at a glance whether a road section meets their expectations.
Preferably, the analyzing the density of the number of pedestrians to obtain the density of the pedestrians in each collection road section includes:
and performing density analysis on the head and tail conditions of the number of the pedestrians in each acquisition road section to obtain the head and tail pedestrian density of each acquisition road section.
Whether a road section is suitable to enter is judged from the head and tail pedestrian densities of that acquisition road section.
Preferably, the analyzing and aggregating the pedestrian information to obtain an aggregation result includes:
performing retention-rate analysis on the number of pedestrians on each road section to obtain the pedestrian retention rate of each acquisition road section.
The retention rate describes how freely pedestrians move on each acquisition road section; it provides a reference for choosing an unobstructed road section and helps pedestrians avoid being held up on a road section, which would spoil the touring experience.
Preferably, performing retention-rate analysis on the number of pedestrians on each road section to obtain the pedestrian retention rate of each acquisition road section comprises:
setting a stay-time threshold for pedestrian staying;
judging whether the time a certain pedestrian has stayed in one place reaches the stay-time threshold, and if so, incrementing the count of retained pedestrians by 1;
and aggregating the number of retained pedestrians and the total number of pedestrians to obtain the retention rate.
Setting a stay-time threshold for pedestrian staying makes the retention-rate value more reliable.
Preferably, the sending the aggregation result as notification information includes:
announcing the notification information through an electronic bulletin board arranged at the intersection. Notifying pedestrians through the electronic bulletin board at the intersection allows them to choose a road section for themselves.
There is also provided a drone for scenic spot guidance, comprising: the unmanned aerial vehicle comprises a memory, a processor and a communication device, wherein the memory is used for storing executable program codes and data, the communication device is used for the unmanned aerial vehicle to carry out communication interaction with other equipment, and the processor is used for calling the executable program codes stored in the memory and executing the following steps:
controlling the unmanned aerial vehicle to cruise regularly and acquiring intersection information of scenic spots;
taking each road section behind the intersection as an acquisition road section;
collecting pedestrian information on the collection road section;
analyzing and aggregating the pedestrian information to obtain an aggregation result;
controlling the communication device to transmit the aggregation result as notification information.
Preferably, the unmanned aerial vehicle further comprises an image acquisition device, and the mode of the processor for acquiring the intersection information of the scenic spot comprises:
controlling the image acquisition device to carry out high-altitude shooting to obtain a first scenic spot image;
and carrying out image recognition on the first scenic spot image to determine the position of the intersection.
Preferably, the manner in which the processor takes each road section behind the intersection as an acquisition road section includes:
determining the road section distribution of the acquisition road sections by recognizing the first scenic spot image;
and adjusting the shooting height and angle of the image acquisition device according to the road section distribution so that every acquisition road section is captured in the second scenic spot image.
Preferably, the manner in which the processor takes each road section behind the intersection as an acquisition road section includes:
road section distribution information is preset in the scenic spot intersection information;
after the intersection position is determined, extracting the road section distribution information from the intersection information and determining the road section distribution of the acquisition road sections;
and adjusting the shooting height and angle of the image acquisition device according to the road section distribution so that every acquisition road section is captured in the second scenic spot image.
Preferably, the mode of collecting pedestrian information on the collection road section by the processor includes:
and identifying the number of pedestrians on each acquisition road section in the second scenic spot image to obtain the number of pedestrians.
Preferably, the analyzing and aggregating of the pedestrian information by the processor to obtain an aggregation result includes:
and carrying out density analysis on the number of the pedestrians to obtain the pedestrian density of each acquisition road section.
Preferably, the manner in which the processor performs density analysis on the number of pedestrians to obtain the pedestrian density of each acquisition road section includes:
and performing density analysis on the head and tail conditions of the number of the pedestrians in each acquisition road section to obtain the head and tail pedestrian density of each acquisition road section.
Preferably, the analyzing and aggregating of the pedestrian information by the processor to obtain an aggregation result includes:
and analyzing the retention rate of the number of the pedestrians on the road section to obtain the retention rate of the pedestrians on each acquisition road section.
Preferably, the manner in which the processor performs retention-rate analysis on the number of pedestrians on each road section to obtain the pedestrian retention rate of each acquisition road section includes:
setting a stay-time threshold for pedestrian staying;
judging whether the time a certain pedestrian has stayed in one place reaches the stay-time threshold, and if so, incrementing the count of retained pedestrians by 1;
and aggregating the number of retained pedestrians and the total number of pedestrians to obtain the retention rate.
Preferably, the manner in which the processor controls the communication device to transmit the aggregation result as notification information includes:
and controlling the communication device to send the notification information to an electronic bulletin board arranged at the intersection for notification.
The invention has the following beneficial effects: the unmanned aerial vehicle photographs and analyzes the road sections behind an intersection in the scenic spot to obtain an analysis result, and the result is announced so that pedestrians at the intersection know the condition of each road section behind it; this guides pedestrians in choosing a road section and diverts foot traffic within the scenic spot.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for obtaining intersection information according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for determining road segment distribution according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method for determining road segment distribution in an embodiment of the present invention;
FIG. 5 is a flowchart illustrating an aggregate result obtaining process according to an embodiment of the present invention;
FIG. 6 is another flowchart of aggregate result acquisition according to an embodiment of the present invention;
FIG. 7 is a retention rate acquisition flow chart according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an unmanned aerial vehicle for scenic spot guidance according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention are described below. From this description, those skilled in the art will be able to implement the invention using the related art and will more clearly understand its innovative features and advantages.
The invention provides an unmanned aerial vehicle scenic spot diversion guidance method. To explain the inventive intent more clearly, an implementation environment is described first. The implementation environment comprises a plurality of unmanned aerial vehicle terminals cruising in a scenic spot. Each unmanned aerial vehicle terminal carries an acquisition module for acquiring scenic spot images; the acquisition module may be a camera. The unmanned aerial vehicle terminal may also carry an image processing module for processing the scenic spot images acquired by the acquisition module. The implementation environment further comprises an electronic bulletin board arranged in the scenic spot; the bulletin board is provided with a display module, and may instead (or additionally) be provided with a voice broadcast module for reminding pedestrians.
In some possible implementation environments, the implementation environment of the invention may further include a base station: the plurality of unmanned aerial vehicle terminals cruising in the scenic spot transmit data to the base station, the base station processes the data, and the processed data are sent to the electronic bulletin board for announcement.
As shown in fig. 1, the method comprises the steps of:
s1, the unmanned aerial vehicle cruises regularly and acquires intersection information of scenic spots;
furthermore, the unmanned aerial vehicle cruises in a specific area in a scenic spot, at least one unmanned aerial vehicle is configured in each specific area, and two unmanned aerial vehicles with different cruising directions are preferably configured in each specific area.
It should be noted that the scenic spot intersection information may include the number of road segments connected at the intersection and the direction of the road segments.
S2, taking each road section behind the intersection as an acquisition road section;
it should be noted that each road segment behind the intersection refers to a road segment extending along the cruising direction of the unmanned aerial vehicle, and the unmanned aerial vehicle is separated from the road segment by the intersection.
Further, after the unmanned aerial vehicle flies past the intersection, it promptly captures the area of the acquisition road sections; specifically, the acquisition road sections are captured continuously by a camera carried on the unmanned aerial vehicle.
S3, collecting pedestrian information on the collection road section;
further, the collected pedestrian information is mainly the number of pedestrians, the number of people in the image is identified by identifying the collected image, and specifically, the head in the collected image is identified to identify the specific number of people.
As a possible embodiment, the identification of the number of people may be performed using infrared imaging.
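The sketch below illustrates one way a per-image people count could be obtained. It is not the patent's method: it substitutes OpenCV's stock HOG pedestrian detector for the head recognition and infrared imaging mentioned above, the detector parameters, file name, and the function name count_pedestrians are assumptions for illustration only, and a top-down aerial view would in practice need a detector trained for that viewpoint.

```python
# Minimal illustrative sketch of per-frame pedestrian counting. The patent
# describes head recognition / infrared imaging; OpenCV's stock HOG person
# detector is used here only as a stand-in. Names and parameters are assumed.
import cv2

_hog = cv2.HOGDescriptor()
_hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def count_pedestrians(image_bgr):
    """Return an approximate pedestrian count for one captured frame."""
    # Each entry in `boxes` is an (x, y, w, h) rectangle around a detection.
    boxes, _weights = _hog.detectMultiScale(image_bgr, winStride=(8, 8), scale=1.05)
    return len(boxes)

if __name__ == "__main__":
    frame = cv2.imread("segment_frame.jpg")   # hypothetical captured frame
    if frame is not None:
        print("pedestrians detected:", count_pedestrians(frame))
```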
S4, analyzing and aggregating the pedestrian information to obtain an aggregation result;
it should be noted that, a processing module may be configured on the unmanned aerial vehicle to analyze and aggregate the pedestrian information, or the pedestrian information may be sent to the base station, and the processing module in the base station analyzes and aggregates the pedestrian information.
And S5, sending the aggregation result as notification information.
Further, the aggregation result may be sent by an unmanned aerial vehicle equipped with the processing module, or the base station may send it after analyzing and aggregating the pedestrian information. Analyzing and aggregating on the unmanned aerial vehicle shortens the transmission path of the information and improves timeliness; analyzing and aggregating at the base station reduces the load on the unmanned aerial vehicle and improves the stability of signal transmission.
In summary, the unmanned aerial vehicle photographs and analyzes the road sections behind an intersection in the scenic spot to obtain an analysis result, and the result is announced so that pedestrians at the intersection know the condition of each road section behind it; this guides pedestrians in choosing a road section and diverts foot traffic within the scenic spot.
As shown in fig. 2, in the embodiment of the present invention, the step S1 of the unmanned aerial vehicle regularly cruising and acquiring scenic spot intersection information includes:
S11, acquiring a first scenic spot image through high-altitude shooting; it should be noted that the first scenic spot image is used to identify intersections of the scenic spot, and if the unmanned aerial vehicle cannot identify an intersection, the remaining steps are not performed.
And S12, performing image recognition on the first scenic spot image to determine the intersection position. Further, intersections are identified from the gray levels of the pixels in the image: an intersection is formed only where several road sections meet, a recognized road section appears as a gray-level band in the first scenic spot image, and an intersection is present where gray-level bands cross.
Furthermore, the gray-band crossing patterns of all intersections are preset to form an intersection library, and the intersection position is determined by comparing the gray-band crossings in the first scenic spot image against those in the intersection library.
Of course, as a possible embodiment, the unmanned aerial vehicle may instead detect its own geographic coordinates after flying to a position, look up the corresponding intersection coordinates in a preset intersection coordinate library, and extract the intersection information; in that case, intersection recognition on the first scenic spot image is not required.
At an intersection, pedestrians in the scenic spot must choose a route. Therefore, once the unmanned aerial vehicle has determined the intersection position, it can provide a reference for the pedestrians' choice in advance and thereby act as a diversion guide.
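As a rough illustration of the gray-band idea above, the sketch below thresholds a top-down gray-level image into a road mask and treats skeleton branch points as band crossings. Skeletonization is a standard technique substituted here, not something the patent specifies, and the threshold value and function name are assumptions.

```python
# Sketch of intersection detection from a top-down gray-level image, assuming
# road surfaces appear as bright gray bands. Skeleton branch points are used
# to approximate the "gray-band crossings" described in the text.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def find_intersections(gray, road_threshold=180):
    """Return (row, col) pixel coordinates of likely road intersections."""
    road_mask = gray >= road_threshold          # bright pixels taken as road
    skeleton = skeletonize(road_mask)           # 1-pixel-wide road centrelines
    # Count the 8-connected skeleton neighbours of every skeleton pixel.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(skeleton.astype(np.uint8), kernel, mode="constant")
    # A centreline pixel with 3 or more neighbours is a branch point, i.e. a
    # place where several gray bands meet - treated here as an intersection.
    branch_points = skeleton & (neighbours >= 3)
    return np.argwhere(branch_points)
```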
As shown in fig. 3, in the embodiment of the present invention, the step S2 of taking each road section behind the intersection as an acquisition road section includes:
S21, determining the road section distribution of the acquisition road sections by identifying the first scenic spot image. Further, road sections are identified from the gray levels of the pixels in the image: a recognized road section appears as a gray-level band in the first scenic spot image, and the number of gray-level bands detected along the flight direction of the unmanned aerial vehicle in the first scenic spot image is the number of acquisition road sections.
And S22, adjusting the shooting height and angle according to the road section distribution so that every acquisition road section is captured in the second scenic spot image.
Furthermore, after the intersection has been determined, the scenic spot is photographed a second time by the camera. During this second shot, all acquisition road sections must appear in the image to form the second scenic spot image, so both the height of the unmanned aerial vehicle and the shooting angle need to be adjusted.
Of course, in some possible embodiments, the road sections may be distributed too widely for one unmanned aerial vehicle to capture the second scenic spot image alone; at least one additional unmanned aerial vehicle can then be added for shooting, and the multiple shots are spliced together to form the second scenic spot image.
Because the road sections in a scenic spot are not laid out regularly, the flying height and shooting angle of the unmanned aerial vehicle need to be adjusted so that the captured second scenic spot image covers the condition of every road section.
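A minimal sketch of the band-counting step described in S21, under the assumptions that the flight direction runs along the image columns, that road surfaces are brighter than their surroundings, and that a scan row taken just beyond the intersection crosses each acquisition road section once; the threshold and function name are illustrative only.

```python
# Sketch of counting acquisition road sections in the first scenic spot image:
# a scan line perpendicular to the flight direction crosses each gray road
# band once, so the number of road-pixel runs on that line is the number of
# road sections. Assumes roads are brighter than their surroundings.
import numpy as np

def count_road_sections(gray, scan_row, road_threshold=180):
    """Count contiguous runs of road pixels on one image row."""
    line = gray[scan_row, :] >= road_threshold            # boolean road profile
    # A run starts wherever the profile switches from False to True.
    starts = np.logical_and(line[1:], np.logical_not(line[:-1]))
    return int(line[0]) + int(np.count_nonzero(starts))
```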
As shown in fig. 4, as a possible embodiment, the step S2 of taking each road section behind the intersection as an acquisition road section includes:
S21a, road section distribution information is preset in the scenic spot intersection information;
S22a, after the intersection position is determined, extracting the road section distribution information from the intersection information and determining the road section distribution of the acquisition road sections;
and S23a, adjusting the shooting height and angle according to the road section distribution so that every acquisition road section is captured in the second scenic spot image.
Furthermore, after the intersection has been determined, the scenic spot is photographed a second time by the camera. During this second shot, all acquisition road sections must appear in the image to form the second scenic spot image, so both the height of the unmanned aerial vehicle and the shooting angle need to be adjusted.
Of course, in some possible embodiments, the road sections may be distributed too widely for one unmanned aerial vehicle to capture the second scenic spot image alone; at least one additional unmanned aerial vehicle can then be added for shooting, and the multiple shots are spliced together to form the second scenic spot image.
Because the road sections in a scenic spot are not laid out regularly, the flying height and shooting angle of the unmanned aerial vehicle need to be adjusted so that the captured second scenic spot image covers the condition of every road section.
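Where several unmanned aerial vehicles contribute shots, the splicing mentioned in the two embodiments above could be done with an off-the-shelf panorama stitcher; the sketch below uses OpenCV's high-level Stitcher in scan mode purely as one possible tool, since the patent does not say how the images are spliced.

```python
# Sketch of merging shots from several unmanned aerial vehicles into a single
# second scenic spot image using OpenCV's high-level stitcher (one possible
# choice; the patent only states that the images are spliced, not how).
import cv2

def build_second_image(frames):
    """Stitch a list of BGR frames into a single panorama, or return None."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # planar / top-down mode
    status, panorama = stitcher.stitch(frames)
    return panorama if status == cv2.Stitcher_OK else None
```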
In an embodiment of the present invention, the collecting pedestrian information on the collection road segment includes:
and identifying the number of pedestrians on each acquisition road section in the second scenic spot image to obtain the number of pedestrians.
The number of pedestrians is the count of pedestrians walking on an acquisition road section; it describes how crowded that road section is and provides a selection reference for pedestrians at the intersection.
As shown in fig. 5, in the embodiment of the present invention, the method further includes the steps of:
s1, the unmanned aerial vehicle cruises regularly and acquires intersection information of scenic spots;
furthermore, the unmanned aerial vehicle cruises in a specific area in a scenic spot, at least one unmanned aerial vehicle is configured in each specific area, and two unmanned aerial vehicles with different cruising directions are preferably configured in each specific area.
It should be noted that the scenic spot intersection information may include the number of road segments connected at the intersection and the direction of the road segments.
S2, taking each road section behind the intersection as an acquisition road section;
it should be noted that each road segment behind the intersection refers to a road segment extending along the cruising direction of the unmanned aerial vehicle, and the unmanned aerial vehicle is separated from the road segment by the intersection.
Further, after the unmanned aerial vehicle flies past the intersection, it promptly captures the area of the acquisition road sections; specifically, the acquisition road sections are captured continuously by a camera carried on the unmanned aerial vehicle.
S3, collecting pedestrian information on the collection road section;
further, the collected pedestrian information is mainly the number of pedestrians, the number of people in the image is identified by identifying the collected image, and specifically, the head in the collected image is identified to identify the specific number of people.
As a possible embodiment, the identification of the number of people may be performed using infrared imaging.
S4, analyzing and aggregating the pedestrian information to obtain an aggregation result;
step S4 includes the steps of:
and S41, performing density analysis on the number of the pedestrians to obtain the pedestrian density of each acquisition road section.
The pedestrian density describes how crowded each acquisition road section is, so that pedestrians can see at a glance whether a road section meets their expectations.
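The patent does not fix a formula for pedestrian density, so the sketch below simply divides the head count of each acquisition road section by its visible area; the data class, field names, and example values are assumptions for illustration only.

```python
# Sketch of the density aggregation in S41: pedestrian density is taken here
# as head count divided by the visible area of each acquisition road section
# (people per square metre), with the area assumed known from the preset
# road section distribution information.
from dataclasses import dataclass

@dataclass
class SegmentObservation:
    segment_id: str
    pedestrian_count: int
    visible_area_m2: float     # assumed to come from the preset road data

def pedestrian_density(obs: SegmentObservation) -> float:
    return obs.pedestrian_count / obs.visible_area_m2

observations = [
    SegmentObservation("east_path", 120, 400.0),   # hypothetical values
    SegmentObservation("lake_path", 35, 500.0),
]
for o in observations:
    print(o.segment_id, round(pedestrian_density(o), 3), "people/m^2")
```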
It should be noted that, a processing module may be configured on the unmanned aerial vehicle to analyze and aggregate the pedestrian information, or the pedestrian information may be sent to the base station, and the processing module in the base station analyzes and aggregates the pedestrian information.
And S51, sending the pedestrian density as notification information.
Further, the aggregation result may be sent by an unmanned aerial vehicle equipped with the processing module, or the base station may send it after analyzing and aggregating the pedestrian information. Analyzing and aggregating on the unmanned aerial vehicle shortens the transmission path of the information and improves timeliness; analyzing and aggregating at the base station reduces the load on the unmanned aerial vehicle and improves the stability of signal transmission.
In this embodiment of the present invention, the step S41 of performing density analysis on the number of pedestrians, and obtaining the pedestrian density of each acquisition road segment includes:
and performing density analysis on the head and tail conditions of the number of the pedestrians in each acquisition road section to obtain the head and tail pedestrian density of each acquisition road section.
Whether a road section is suitable to enter is judged from the head and tail pedestrian densities of that acquisition road section.
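The text does not define "head" and "tail" precisely; the sketch below assumes they are the halves of an acquisition road section nearest to and farthest from the intersection, with each half's density taken as its count over half the section area. Names and numbers are illustrative only.

```python
# Sketch of head/tail density analysis under the stated assumption about what
# "head" and "tail" mean for an acquisition road section.
def head_tail_density(head_count, tail_count, section_area_m2):
    """Return (head_density, tail_density) in people per square metre."""
    half_area = section_area_m2 / 2.0
    return head_count / half_area, tail_count / half_area

head_d, tail_d = head_tail_density(head_count=80, tail_count=15,
                                   section_area_m2=400.0)  # hypothetical values
# A crowded head with an empty tail suggests pedestrians are queueing at the
# entrance of the section, so entering it now may not be advisable.
print(round(head_d, 2), round(tail_d, 2))
```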
As shown in fig. 6, as a possible embodiment, the method further includes:
s1, the unmanned aerial vehicle cruises regularly and acquires intersection information of scenic spots;
furthermore, the unmanned aerial vehicle cruises in a specific area in a scenic spot, at least one unmanned aerial vehicle is configured in each specific area, and two unmanned aerial vehicles with different cruising directions are preferably configured in each specific area.
It should be noted that the scenic spot intersection information may include the number of road segments connected at the intersection and the direction of the road segments.
S2, taking each road section behind the intersection as an acquisition road section;
it should be noted that each road segment behind the intersection refers to a road segment extending along the cruising direction of the unmanned aerial vehicle, and the unmanned aerial vehicle is separated from the road segment by the intersection.
Further, after the unmanned aerial vehicle flies past the intersection, it promptly captures the area of the acquisition road sections; specifically, the acquisition road sections are captured continuously by a camera carried on the unmanned aerial vehicle.
S3, collecting pedestrian information on the collection road section;
further, the collected pedestrian information is mainly the number of pedestrians, the number of people in the image is identified by identifying the collected image, and specifically, the head in the collected image is identified to identify the specific number of people.
As a possible embodiment, the identification of the number of people may be performed using infrared imaging.
S4, analyzing and aggregating the pedestrian information to obtain an aggregation result;
Step S4 includes the steps of:
And S41a, performing retention-rate analysis on the number of pedestrians on each road section to obtain the pedestrian retention rate of each acquisition road section.
The retention rate describes how freely pedestrians move on each acquisition road section; it provides a reference for choosing an unobstructed road section and helps pedestrians avoid being held up on a road section, which would spoil the touring experience.
It should be noted that, a processing module may be configured on the unmanned aerial vehicle to analyze and aggregate the pedestrian information, or the pedestrian information may be sent to the base station, and the processing module in the base station analyzes and aggregates the pedestrian information.
S51a, sending the staying rate as notification information.
Further, the aggregation result may be sent by an unmanned aerial vehicle equipped with the processing module, or the base station may send it after analyzing and aggregating the pedestrian information. Analyzing and aggregating on the unmanned aerial vehicle shortens the transmission path of the information and improves timeliness; analyzing and aggregating at the base station reduces the load on the unmanned aerial vehicle and improves the stability of signal transmission.
As shown in fig. 7, in this embodiment of the present invention, performing retention-rate analysis on the number of pedestrians on each road section to obtain the pedestrian retention rate of each acquisition road section includes:
S41a1, setting a stay-time threshold for pedestrian staying; it should be noted that the stay-time threshold refers to the time a pedestrian remains within a small area, and it may be preset, for example, to three minutes or five minutes; the specific value is not limited in this embodiment.
S41a2, judging whether the time a certain pedestrian has stayed in one place reaches the stay-time threshold, and if so, incrementing the count of retained pedestrians by 1;
S41a3, aggregating the number of retained pedestrians and the total number of pedestrians to obtain the retention rate. Specifically, the retention rate is the ratio of the number of retained pedestrians to the total number of pedestrians; for example, if 50 out of 100 pedestrians meet the retention condition, the number of retained pedestrians is 50 and the retention rate is 50/100 = 50%. The higher the retention rate, the less clear the road section and the more likely it is to become congested.
Setting a stay-time threshold for pedestrian staying makes the retention-rate value more reliable.
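A minimal sketch of steps S41a1 to S41a3. Per-pedestrian stay times are assumed to come from frame-to-frame tracking, which is outside this sketch; only the thresholding and aggregation described above are shown, reproducing the 50-out-of-100 worked example.

```python
# Sketch of the retention-rate aggregation: threshold each pedestrian's stay
# time and divide the number of retained pedestrians by the total observed.
def retention_rate(stay_seconds, stay_threshold_s=180.0):
    """Fraction of observed pedestrians whose stay time reaches the threshold."""
    if not stay_seconds:
        return 0.0
    retained = sum(1 for t in stay_seconds if t >= stay_threshold_s)
    return retained / len(stay_seconds)

# Worked example from the text: 50 of 100 pedestrians meet the retention
# condition, giving a retention rate of 0.5 (i.e. 50%).
example = [200.0] * 50 + [30.0] * 50
assert retention_rate(example) == 0.5
```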
In this embodiment of the present invention, the sending the aggregation result as notification information includes:
the notification information is notified through an electronic bulletin board arranged at the intersection. The pedestrian is notified through the electronic bulletin board arranged at the intersection, so that the pedestrian can select the road section by himself.
Further, be provided with display module on the bulletin board, certainly also can be the voice broadcast module for remind the pedestrian.
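The patent does not specify how the notification information reaches the bulletin board, so the sketch below uses a plain HTTP POST to a hypothetical bulletin-board endpoint; the URL, payload fields, and function name are invented for illustration only.

```python
# Sketch of pushing the aggregation result to an electronic bulletin board
# over HTTP. The endpoint and payload structure are hypothetical; running the
# example requires such an endpoint to exist.
import requests

def announce(board_url, intersection_id, per_section_results):
    """Push per-road-section results to an electronic bulletin board."""
    payload = {
        "intersection": intersection_id,
        "sections": per_section_results,   # e.g. {"east_path": {"density": 0.3}}
    }
    resp = requests.post(board_url, json=payload, timeout=5)
    resp.raise_for_status()

if __name__ == "__main__":
    announce("http://bulletin-board.example/api/notice",   # hypothetical endpoint
             intersection_id="gate_2",
             per_section_results={"east_path": {"density": 0.30, "retention": 0.5}})
```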
The embodiment of the invention also provides an unmanned aerial vehicle for scenic spot guidance, which can be used to execute the unmanned aerial vehicle scenic spot diversion guidance method provided in the foregoing embodiments. As shown in fig. 8, the drone may include at least: a memory 10, at least one processor 20 such as a CPU (Central Processing Unit), and at least one communication device 30 used for communication interaction between the drone and other devices. The memory 10, the processor 20 and the communication device 30 may be communicatively connected by one or more buses. Those skilled in the art will appreciate that the structure of the drone shown in fig. 8 does not limit the embodiments of the present invention: the connection may be a bus or star topology, and the drone may include more or fewer components than those shown, combine certain components, or arrange the components differently.
The memory 10 may be a high-speed RAM memory or a non-volatile memory (non-volatile memory), such as at least one disk memory. The memory 10 may optionally be at least one memory device located remotely from the processor 20. Memory 10 may be used to store executable program code and data and embodiments of the present invention are not limited in this respect.
In the drone shown in fig. 8, the processor 20 may be configured to call the executable program code stored in the memory 10 to perform the following steps:
controlling the unmanned aerial vehicle to cruise regularly and acquiring intersection information of scenic spots;
taking each road section behind the intersection as an acquisition road section;
collecting pedestrian information on a collection road section;
analyzing and aggregating the pedestrian information to obtain an aggregation result;
the control communication device 30 transmits the aggregation result as notification information.
Optionally, the unmanned aerial vehicle shown in fig. 8 may further include an image acquisition device (not shown in the figure), such as a camera; the manner in which the processor 20 obtains the scenic spot intersection information may include:
controlling an image acquisition device to carry out high-altitude shooting to obtain a first scenic spot image;
and carrying out image recognition on the first scenic spot image to determine the position of the intersection.
Optionally, the manner in which the processor 20 uses each road section behind the intersection as an acquisition road section may include:
determining the road section distribution of the acquisition road sections by recognizing the first scenic spot image;
and adjusting the shooting height and angle of the image acquisition device according to the road section distribution so that every acquisition road section is captured in the second scenic spot image.
Optionally, the manner in which the processor 20 uses each road section behind the intersection as an acquisition road section may include:
road section distribution information is preset in the scenic spot intersection information;
after the intersection position is determined, extracting the road section distribution information from the intersection information and determining the road section distribution of the acquisition road sections;
and adjusting the shooting height and angle of the image acquisition device according to the road section distribution so that every acquisition road section is captured in the second scenic spot image.
Optionally, the manner of collecting the pedestrian information on the collection road segment by the processor 20 may include:
and identifying the number of pedestrians on each acquisition road section in the second scenic spot image to obtain the number of pedestrians.
Optionally, the analyzing and aggregating the pedestrian information by the processor 20 to obtain an aggregation result may include:
and carrying out density analysis on the number of pedestrians to obtain the pedestrian density of each acquisition road section.
Optionally, the density analysis of the number of pedestrians performed by the processor 20 to obtain the pedestrian density of each collected road segment may include:
and performing density analysis on the head and tail conditions of the number of pedestrians in each acquisition road section to obtain the head and tail pedestrian density of each acquisition road section.
Optionally, the analyzing and aggregating the pedestrian information by the processor 20 to obtain an aggregation result may include:
and analyzing the retention rate of the number of the pedestrians in the road section to obtain the retention rate of the pedestrians in each acquisition road section.
Optionally, the manner in which the processor 20 performs retention-rate analysis on the number of pedestrians to obtain the pedestrian retention rate of each acquisition road section may include:
setting a stay-time threshold for pedestrian staying;
judging whether the time a certain pedestrian has stayed in one place reaches the stay-time threshold, and if so, incrementing the count of retained pedestrians by 1;
and aggregating the number of retained pedestrians and the total number of pedestrians to obtain the retention rate.
Optionally, the manner in which the processor 20 controls the communication device 30 to send the aggregation result as the notification information may include:
the control communication device 30 transmits the notification information to an electronic bulletin board provided at the intersection to notify.
With the drone shown in fig. 8, the road sections behind an intersection in the scenic spot are photographed and analyzed to obtain an analysis result, and the result is announced so that pedestrians at the intersection know the condition of every road section behind it; this guides pedestrians in choosing a road section and diverts foot traffic within the scenic spot.
The foregoing is a detailed description of the present invention in connection with specific preferred embodiments, but the specific embodiments of the present invention are not limited to these descriptions. For those skilled in the art to which the invention pertains, several simple deductions or substitutions may be made without departing from the spirit of the invention, and all such variations shall be considered to fall within the protection scope of the invention.

Claims (8)

1. A unmanned aerial vehicle for scenic spot guidance, comprising: the unmanned aerial vehicle comprises a memory, a processor and a communication device, wherein the memory is used for storing executable program codes and data, the communication device is used for the unmanned aerial vehicle to carry out communication interaction with other equipment, and the processor is used for calling the executable program codes stored in the memory and executing the following steps:
controlling the unmanned aerial vehicle to cruise regularly and acquiring intersection information of scenic spots;
taking each road section behind the intersection as an acquisition road section;
collecting pedestrian information on the collection road section;
analyzing and aggregating the pedestrian information to obtain an aggregation result;
controlling the communication device to send the aggregation result as notification information;
wherein the unmanned aerial vehicle further comprises an image acquisition device, and the manner in which the processor acquires the scenic spot intersection information comprises:
controlling the image acquisition device to carry out high-altitude shooting to obtain a first scenic spot image;
carrying out image recognition on the first scenic spot image to determine the position of the intersection;
the processor takes each road section behind the intersection as a collection road section, and the method comprises the following steps:
road section distribution information is preset in the scenic spot intersection information;
after the intersection position is determined, extracting road section distribution information in the intersection information, and determining road section distribution of the acquired road section;
and adjusting the shooting height and angle of the image acquisition device according to the road section distribution so that every acquisition road section is captured in the second scenic spot image.
2. The unmanned aerial vehicle for scenic spot guidance according to claim 1, wherein the manner in which the processor takes each road section behind the intersection as an acquisition road section includes:
determining the road section distribution of the acquisition road sections by recognizing the first scenic spot image;
and adjusting the shooting height and angle of the image acquisition device according to the road section distribution so that every acquisition road section is captured in the second scenic spot image.
3. The unmanned aerial vehicle for scenic spot guidance according to claim 1 or 2, wherein the manner in which the processor collects pedestrian information on the acquisition road sections comprises:
and identifying the number of pedestrians on each acquisition road section in the second scenic spot image to obtain the number of pedestrians.
4. The unmanned aerial vehicle for scenic spot guidance according to claim 3, wherein the manner in which the processor analyzes and aggregates the pedestrian information to obtain an aggregation result comprises:
and carrying out density analysis on the number of the pedestrians to obtain the pedestrian density of each acquisition road section.
5. The unmanned aerial vehicle for scenic spot guidance according to claim 4, wherein the manner in which the processor performs density analysis on the number of pedestrians to obtain the pedestrian density of each acquisition road section comprises:
and performing density analysis on the head and tail conditions of the number of the pedestrians in each acquisition road section to obtain the head and tail pedestrian density of each acquisition road section.
6. The unmanned aerial vehicle for scenic spot guidance according to claim 5, wherein the manner in which the processor analyzes and aggregates the pedestrian information to obtain an aggregation result comprises:
and analyzing the retention rate of the number of the pedestrians on the road section to obtain the retention rate of the pedestrians on each acquisition road section.
7. The unmanned aerial vehicle for scenic spot guidance according to claim 6, wherein the manner in which the processor performs retention-rate analysis on the number of pedestrians to obtain the pedestrian retention rate of each acquisition road section comprises:
setting a stay-time threshold for pedestrian staying;
judging whether the time a certain pedestrian has stayed in one place reaches the stay-time threshold, and if so, incrementing the count of retained pedestrians by 1;
and aggregating the number of retained pedestrians and the total number of pedestrians to obtain the retention rate.
8. The unmanned aerial vehicle for scenic spot guidance according to claim 1, wherein the manner in which the processor controls the communication device to send the aggregation result as notification information comprises:
and controlling the communication device to send the notification information to an electronic bulletin board arranged at the intersection for notification.
CN201811013220.2A 2018-08-31 2018-08-31 A unmanned aerial vehicle for scenic spot guides Active CN108877228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811013220.2A CN108877228B (en) 2018-08-31 2018-08-31 A unmanned aerial vehicle for scenic spot guides

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811013220.2A CN108877228B (en) 2018-08-31 2018-08-31 A unmanned aerial vehicle for scenic spot guides

Publications (2)

Publication Number Publication Date
CN108877228A CN108877228A (en) 2018-11-23
CN108877228B true CN108877228B (en) 2021-04-09

Family

ID=64322640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811013220.2A Active CN108877228B (en) 2018-08-31 2018-08-31 A unmanned aerial vehicle for scenic spot guides

Country Status (1)

Country Link
CN (1) CN108877228B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109760847A (en) * 2019-03-27 2019-05-17 李良杰 Tour guide's unmanned plane
CN115503956A (en) * 2022-11-04 2022-12-23 武汉红色智旅文化科技有限公司 Sightseeing line navigation unmanned aerial vehicle system
CN116774734B (en) * 2023-08-24 2023-10-24 北京中景合天科技有限公司 Unmanned aerial vehicle-based digital twin patrol method for intelligent tourist attraction

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8643715B2 (en) * 2010-09-25 2014-02-04 Kyu Hwang Cho Real-time remote-viewing digital compass
WO2016132295A1 (en) * 2015-02-19 2016-08-25 Francesco Ricci Guidance system and automatic control for vehicles
CN106828927A (en) * 2015-12-04 2017-06-13 中华映管股份有限公司 Using nurse's system of unmanned vehicle
CN106828928A (en) * 2016-12-29 2017-06-13 合肥旋极智能科技有限公司 A kind of unmanned plane search and rescue system based on Internet of Things
CN207409136U (en) * 2017-11-22 2018-05-25 成都大学 Traffic monitoring system based on aircraft

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101325690A (en) * 2007-06-12 2008-12-17 上海正电科技发展有限公司 Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow
CN101464944A (en) * 2007-12-19 2009-06-24 中国科学院自动化研究所 Crowd density analysis method based on statistical characteristics
CN103593991A (en) * 2013-11-20 2014-02-19 东莞中国科学院云计算产业技术创新与育成中心 Traffic evacuation guide system and traffic evacuation method thereof
CN104036352A (en) * 2014-06-09 2014-09-10 陕西师范大学 Space-time regulation and emergency guidance system and method for scenic spot tourists
CN105843183A (en) * 2016-03-10 2016-08-10 赛度科技(北京)有限责任公司 Integrated management system for UAV based on 4G/WIFI network communication technology
CN105898216A (en) * 2016-04-14 2016-08-24 武汉科技大学 Method of counting number of people by using unmanned plane
CN108255942A (en) * 2016-12-29 2018-07-06 斯凯通达有限公司 The method of facility number capacity in configuration skifield, amusement park or gymnasium
CN107730427A (en) * 2017-10-09 2018-02-23 安徽畅通行交通信息服务有限公司 A kind of scenic spot Traffic monitoring management system
CN108253957A (en) * 2017-12-29 2018-07-06 广州亿航智能技术有限公司 Route guidance method, unmanned plane, server and system based on unmanned plane
CN108388838A (en) * 2018-01-26 2018-08-10 重庆邮电大学 Unmanned plane population surveillance system and monitoring method over the ground

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"智慧景区概念在旅游景区的应用模式探究";刘喆 等;《度假旅游》;20180315;正文第6节 *

Also Published As

Publication number Publication date
CN108877228A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
US10928829B2 (en) Detection of traffic dynamics and road changes in autonomous driving
CN111145545B (en) Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
US10699125B2 (en) Systems and methods for object tracking and classification
CN108877228B (en) A unmanned aerial vehicle for scenic spot guides
CN110688992A (en) Traffic signal identification method and device, vehicle navigation equipment and unmanned vehicle
CN111859778B (en) Parking model generation method and device, electronic device and storage medium
CN109116846B (en) Automatic driving method, device, computer equipment and storage medium
CN106092123B (en) A kind of video navigation method and device
CN109358648B (en) Unmanned aerial vehicle autonomous flight method and device and unmanned aerial vehicle
CN110188482B (en) Test scene creating method and device based on intelligent driving
US20200200545A1 (en) Method and System for Determining Landmarks in an Environment of a Vehicle
EP3690728B1 (en) Method and device for detecting parking area using semantic segmentation in automatic parking system
WO2020007589A1 (en) Training a deep convolutional neural network for individual routes
CN103377558A (en) System and method for managing and controlling traffic flow
CN111444798A (en) Method and device for identifying driving behavior of electric bicycle and computer equipment
US20210295067A1 (en) System and method for localization of traffic signs
CN115294544A (en) Driving scene classification method, device, equipment and storage medium
CN113792106A (en) Road state updating method and device, electronic equipment and storage medium
CN113111876A (en) Method and system for obtaining evidence of traffic violation
US10846544B2 (en) Transportation prediction system and method
EP3349201B1 (en) Parking assist method and vehicle parking assist system
CN112537301B (en) Driving reference object selection method and device for intelligent driving traffic carrier
CN113221800A (en) Monitoring and judging method and system for target to be detected
TWI743637B (en) Traffic light recognition system and method thereof
CN113763704A (en) Vehicle control method, device, computer readable storage medium and processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210303

Address after: 430000 room 10, 11 / F, block B, 628 Wuluo Road, Zhongnan road street, Wuchang District, Wuhan City, Hubei Province

Applicant after: Wuhan Zijun Information Technology Co.,Ltd.

Address before: Room 403, tiandixuan, block a, sunshine tiandijiayuan, No.2 Beili North Road, Cuizhu street, Luohu District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN YANBEN BRAND DESIGN Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20210324

Address after: 110000 room 911, No.56, Huanghe South Street, Huanggu District, Shenyang City, Liaoning Province

Applicant after: Liaoning Bohao Land Technology Development Co.,Ltd.

Address before: 430000 room 10, 11 / F, block B, 628 Wuluo Road, Zhongnan road street, Wuchang District, Wuhan City, Hubei Province

Applicant before: Wuhan Zijun Information Technology Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210918

Address after: 2702, building 3, bairuijing phase III, baotongsi Road, Zhongnan Road, Wuchang District, Wuhan City, Hubei Province

Patentee after: He Yonggang

Address before: 110000 room 911, No.56, Huanghe South Street, Huanggu District, Shenyang City, Liaoning Province

Patentee before: Liaoning Bohao Land Technology Development Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20211022

Address after: 110031 room 911, No. 56, Huanghe South Street, Huanggu District, Shenyang City, Liaoning Province

Patentee after: Liaoning Bohao Land Technology Development Co.,Ltd.

Address before: 2702, building 3, bairuijing phase III, baotongsi Road, Zhongnan Road, Wuchang District, Wuhan City, Hubei Province

Patentee before: He Yonggang