CN105045582B - Event positioning method based on mobile phone photographing behavior - Google Patents

Event positioning method based on mobile phone photographing behavior

Info

Publication number
CN105045582B
CN105045582B CN201510394428.3A CN201510394428A
Authority
CN
China
Prior art keywords
mobile phone
grid
event
positioning
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510394428.3A
Other languages
Chinese (zh)
Other versions
CN105045582A (en)
Inventor
郭斌
陈荟慧
於志文
吴文乐
周兴社
王柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201510394428.3A priority Critical patent/CN105045582B/en
Publication of CN105045582A publication Critical patent/CN105045582A/en
Application granted granted Critical
Publication of CN105045582B publication Critical patent/CN105045582B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)
  • Navigation (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses an event positioning method based on mobile phone photographing behavior. Its steps include: at the same scene, taking multiple photos in different directions with mobile phone photographing software to form a photo group, and recording the context information at the time each photo is taken, including the photographing position and the direction the camera is pointing; estimating the area where the event occurred and dividing it into grids; based on the context information of each photo, establishing a trapezoidal area and calculating the positioning weights of the grids it covers; calculating the positioning weights of the grids covered by all the photos in the photo group to obtain the accumulated positioning weight of each grid, then determining the event-location grid set according to a set threshold, and finally determining the geographic coordinates of the event location. The invention does not require the photographer to open an additional application or perform any specific operation, so no extra burden is placed on witnesses; positioning the event improves the credibility of the photos and also provides information for the police, the media and the public to find the scene of the event.

Description

Event positioning method based on mobile phone photographing behavior
Technical Field
The invention relates to the technical field of information processing, in particular to an event positioning method based on the spatial context in which a mobile phone photo is taken.
Background
Recording events happening around us with a mobile phone camera has become a popular behavior. People often take pictures with their phones to record unexpected events in daily life, such as street fights and fires; such events often draw the attention of the public and the media, and many people share them on social media (such as QQ groups, WeChat and microblogs). The place where an event occurs is usually described in words, and some social media support GPS positioning, which improves the accuracy of event localization. However, the prior art mostly relies on information provided by individuals; from isolated individual information, such as the GPS positions of several people, the relationship between the position of the event and the position of each reporter cannot be determined, so accurate positioning cannot be achieved.
In the paper "If You See Something, Swipe Towards It: Crowdsourced Event Localization Using Smartphones", presented at the international conference UbiComp 2013, the authors locate the place where an event occurs by using the orientations of multiple witnesses towards the event: a witness holds the phone flat and draws a straight line on the screen with a finger, aligning the direction of the line with the place where the event is occurring; the geographic area covered by these lines is then calculated, and the most heavily covered area is taken as the place of the event. The method has the following drawbacks: the phone must be held horizontally while a straight line is drawn, the error is large, operation training is required, and only event positioning is achieved, without any other information related to the event.
Disclosure of Invention
Photos are a powerful means of reconstructing an event, and they are a basic element of online news, microblogs and the like, so photographing is very common at an event scene. The invention aims to accurately locate the place where an event occurs by exploiting this photographing behavior. To improve the traceability of the event, the invention accurately positions the place where the event occurs using the photographing context information of multiple photos provided by multiple people.
In order to realize the task, the invention adopts the following technical scheme:
an event positioning method based on mobile phone photographing behaviors comprises the following steps:
step 1, taking an event as a center, taking a plurality of photos from different directions by utilizing mobile phone photographing software to form a photo group, and recording situation information when the photos are taken;
step 2, roughly estimating the area of the event according to the shooting positions of the plurality of photos, and dividing the area into grids;
step 3, constructing a trapezoidal area from the place of the mobile phone when the mobile phone is used for photographing along the photographing direction according to the situation information of each photo, and then calculating the positioning weight of each grid covered by the trapezoidal area;
and step 4, for all the photos in the photo group, calculating the positioning weights of the grids covered by each trapezoidal area according to the method in step 3, accumulating them to obtain the accumulated positioning weight of each grid, determining an event-positioning grid set according to a set threshold after normalization, and then determining the geographic coordinates of the place where the event occurred.
Further, the context information in step 1 includes:
the photographing position coordinate triple pLoc: <lon, lat, er>, where lon is the longitude, lat is the latitude, and er is the GPS positioning error radius;
the reading triple of the 3D accelerometer at the moment of photographing, accValue: <accx, accy, accz>, where accx, accy and accz respectively represent the readings of the 3D accelerometer on the x, y and z axes of the mobile phone coordinate system;
the reading triple of the 3D magnetometer at the moment of photographing, megValue: <megx, megy, megz>, where megx, megy and megz respectively represent the readings of the 3D magnetometer on the x, y and z axes of the mobile phone coordinate system.
Further, the process of establishing the trapezoidal area in step 3 includes:
mapping the GPS coordinates of the photo shooting point M (lon, lat) into a temporarily established Cartesian coordinate system, and constructing a trapezoidal area ABCD and a ray MT, wherein the positions of each vertex and end point T of the trapezoid are as follows:
A:M.x+er*Cos(π/2+θ),M.y+er*Sin(π/2+θ)
B:M.x+er*Cos(3*π/2+θ),M.y+er*Sin(3*π/2+θ)
C:M.x+los*Cos(3*π/2+θ),M.y+los*Sin(3*π/2+θ)
D:M.x+los*Cos(π/2+θ),M.y+los*Sin(π/2+θ)
T:M.x+los*Cos(θ),M.y+los*Sin(θ)
in the above formulas, M.x is the coordinate of the shooting point M on the x axis of the Cartesian coordinate system, M.y is the coordinate of the shooting point M on the y axis of the Cartesian coordinate system, θ is the counterclockwise angle between the direction the camera is pointing when the photo is taken and the x axis of the world coordinate system, er is the GPS positioning error radius, and los is the maximum visible distance.
Further, the area where the event is located is divided into m grids in step 2, and the specific process of calculating the positioning weight of each grid in step 3 includes:
for each divided grid, judging whether the center point of the grid is covered by the trapezoidal area ABCD, and if so, calculating the positioning weight w of the grid with the following formulas:
x = Len(RS) / Len(US)
w = F(x) = (1/(σ*√(2π))) * exp(-x²/(2*σ²))
In the above two formulas, point S is the foot of the perpendicular from the grid center point R to the ray MT, point U is the intersection of the ray SR and the trapezoid boundary, Len(RS) is the length of segment RS, Len(US) is the length of segment US, and σ = 0.5.
Further, the step of calculating the cumulative positioning weight in step 4 includes:
calculating, according to the method in step 3, the positioning weights w_i,j of all the n photos in the photo group K over the divided grids, where w_i,j denotes the positioning weight of the j-th grid (j ≤ m) computed from the i-th photo (i ≤ n); then, using all the photos in the photo group, the cumulative positioning weight of grid g_j is obtained as:
C(g_j) = w_1,j + w_2,j + … + w_n,j
further, the process of determining the event location grid set in step 4 is as follows:
the calculation formula of the grid set LE is as follows:
LE = {g_j | C′(g_j) ≥ th}
In the above formula, th is a set threshold, 0 < th < 1; C′(g_j) is the normalized cumulative positioning weight, calculated as:
C′(g_j) = C(g_j) / Max{C(g_1), C(g_2), …, C(g_m)}
In the above formula, m is the number of grids.
Further, the process of determining the geographic coordinates of the event location in step 4 is as follows:
calculating the center of the grid set LE, namely the event positioning point EP (x, y), wherein the error radius is EPr, and the formula is as follows:
the x coordinate of the event anchor point is:
EP.x = ( Σ_{le∈LE} le.x ) / |LE|
the y coordinate of the event anchor point is:
EP.y = ( Σ_{le∈LE} le.y ) / |LE|
the error radius is:
EPr = (1/2) * √( (Max(le.x) - Min(le.x))² + (Max(le.y) - Min(le.y))² )
In the above formulas, le denotes one grid in the grid set LE, le.x is the x coordinate of grid le, le.y is the y coordinate of grid le, and |LE| is the size of the grid set LE; Max(le.x) and Min(le.x) are the maximum and minimum x coordinates of the grids in LE, and Max(le.y) and Min(le.y) are the maximum and minimum y coordinates of the grids in LE;
the EP (x, y) is back-transformed to GPS coordinates, i.e., geographical coordinates that are the event location.
The invention has the following technical characteristics:
1. Existing applications use the many sensors embedded in a mobile phone only to estimate the direction the camera is pointing when a photo is taken, whereas the invention can complete the positioning of an event using only the photographing directions of several photos.
2. The invention does not require the photographer to open an additional application or perform any specific operation; it only needs some photos taken by witnesses, photos they intended to take anyway, so no extra burden is placed on the witnesses.
3. Positioning the event improves the credibility of the photos and provides information for the police, the media and the public to find the scene of the event.
Drawings
FIG. 1 Mobile phone coordinate System;
FIG. 2 a world coordinate system;
FIG. 3 is a graph of a weight distribution function for a coverage area;
FIG. 4 illustrates the positioning trapezoid calculated from the photographing context of a photo;
FIG. 5 illustrates the relationship between the positioning weight calculation and the grid;
FIG. 6 is an exemplary graph of event location results calculated based on multiple photographs;
Detailed Description
The invention uses the photographing behavior information of multiple people (including the positions of the photographers and their photographing directions) to locate the position of an event. The basic idea is as follows: first, the position of each photographer is associated with the position of the event by means of the photographing direction; then the event-positioning solution space is reduced according to photographing habits and framing constraints, and the place of the event and the probability of its occurrence are estimated from each photo; finally, the event position is determined by a probability accumulation method from the positioning conclusions of the multiple photos. The meanings of the parameter symbols used in the scheme are shown in Table 1 below:
TABLE 1 parameter symbol description Table
The method comprises the following specific steps:
1. terminal recording process
Step 1, taking a plurality of photos from different directions by using mobile phone photographing software (APP) with an event as a center to form a photo group K, and recording situation information when the photos are taken;
the event is taken as the center, namely the image information of a plurality of photos covers the actual position of the event in the process of taking the photos; obtaining the position and the photographing direction of the person taking a picture according to the situation information;
the APP refers to an application which can take pictures and has the use permission of acquiring each sensor and navigation module on the mobile phone; the context information is specifically:
1.1 The photographing position coordinate triple pLoc: <lon, lat, er>. If GPS is not enabled on the phone, cell-tower positioning is used, in which case the parameter er in the triple represents the positioning error; mobile phone positioning can use the open positioning API provided by Baidu;
1.2 The reading triple of the 3D accelerometer at the moment of photographing:
accValue:<accx,accy,accz>
the accx, accy and accz respectively represent the reading values of the 3D accelerometer on the x axis, the y axis and the z axis in the coordinate system of the mobile phone. The coordinate system of the mobile phone is shown in figure 1;
1.3 The reading triple of the 3D magnetometer at the moment of photographing:
megValue:<megx,megy,megz>
wherein the megx, the megy and the megz respectively represent the reading values of an x axis, a y axis and a z axis of the 3D magnetometer in a coordinate system of the mobile phone.
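For illustration only, the per-photo context record described in 1.1-1.3 can be held in a simple data structure. The sketch below is written in Java (the recording APP is assumed to be an Android application); the class name and field grouping are assumptions of this sketch, not part of the patent.

// Illustrative container for one photo's context record <pLoc, accValue, megValue>.
public final class PhotoContext {
    // pLoc: photographing position <lon, lat, er>
    public final double lon;   // longitude, degrees
    public final double lat;   // latitude, degrees
    public final double er;    // GPS (or cell-tower) positioning error radius, meters

    // accValue: 3D accelerometer reading <accx, accy, accz> in the phone coordinate system
    public final float accx, accy, accz;

    // megValue: 3D magnetometer reading <megx, megy, megz> in the phone coordinate system
    public final float megx, megy, megz;

    public PhotoContext(double lon, double lat, double er,
                        float accx, float accy, float accz,
                        float megx, float megy, float megz) {
        this.lon = lon; this.lat = lat; this.er = er;
        this.accx = accx; this.accy = accy; this.accz = accz;
        this.megx = megx; this.megy = megy; this.megz = megz;
    }
}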
2. Data processing procedure
Step 2, estimating the area of the event according to the positions of the multiple photos, and dividing the area into grids;
and calculating the direction of the camera alignment when the mobile phone takes a picture, recording the direction as theta, wherein the theta represents the included angle formed by the direction of the camera alignment and the x axis of the world coordinate system along the anticlockwise rotation, and the unit is degree. The world coordinate system has an x-axis oriented in east and a y-axis oriented in north, as shown in fig. 2.
It is assumed here that the camera (front or rear) of the mobile phone is perpendicular to the phone screen; the rear camera is taken as the example in this scheme. θ can be obtained by directly calling the Android development API, with code as follows (common boilerplate code omitted):
SensorManager.getRotationMatrix(rotate,null,accValue,megValue);
SensorManager.getOrientation(rotate,oriValue);
The orientation triple oriValue = <pitch, roll, azimuth> represents the rotation angles of the mobile phone around the x, y and z axes of the mobile phone coordinate system, respectively, as shown in fig. 1. Next, the θ value is obtained from oriValue, for example as follows:
azimuth_deg = (azimuth * 180/π + 360) % 360
θ = (450 - azimuth_deg) % 360
The % in the above two formulas represents the remainder operation.
Because each photo is taken facing the place where the event occurs, the direction the camera is pointing at shooting time covers the event location. If all the photographing view angles θ and the shooting areas along the θ directions are placed in one picture, the overlapping part of these areas is the approximate area where the event is located. This area is divided into grids with side length G (G = 10, i.e. each grid covers 100 square meters), and the total number of grids is m, as sketched below.
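A minimal Java sketch of the gridding step, assuming the estimated area is given as a bounding box in the planar frame introduced in step 3.1 and is split into square cells of side G meters, keeping each cell center for the coverage test of step 3; the names are illustrative.

// Build the centers of square grid cells covering the bounding box [minX, maxX] x [minY, maxY].
public final class GridBuilder {
    public static double[][] gridCenters(double minX, double minY, double maxX, double maxY, double g) {
        int cols = (int) Math.ceil((maxX - minX) / g);
        int rows = (int) Math.ceil((maxY - minY) / g);
        double[][] centers = new double[rows * cols][];
        int k = 0;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                centers[k++] = new double[] { minX + (c + 0.5) * g, minY + (r + 0.5) * g };
        return centers;   // m = rows * cols grid centers, each {x, y}
    }
}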
Step 3, constructing a trapezoidal area from the place of the mobile phone along the photographing direction according to the situation information of each photo, and then calculating the positioning weight of each grid covered by the trapezoid; take any one photograph as an example:
step 3.1, let the visible distance of the photo be los, i.e. the visible distance from the camera to the photographed object; in general, los is approximately 45 meters. The maximum viewing angle of the camera is 2*η.
Based on the prior art, the rotation angles α, β and γ of the mobile phone around the X, Y and Z axes can be calculated from accValue and megValue. Taking a photo shot with the phone held vertically as an example, the photographing direction θ is calculated from γ. Here the photographing direction θ is the counterclockwise angle between the projection of the direction pointed by the camera onto the ground and the east direction.
Assuming that the photo is taken at point M (lon, lat), for simplicity of calculation, we map the GPS coordinates into a temporarily established cartesian coordinate system (the X-axis direction of the coordinate system coincides with the east direction, and the Y-axis direction coincides with the north direction), as shown in fig. 4, where the origin O is the GPS position of the first photo in the photo set and is denoted as OG (lon, lat). The position coordinates (x, y) of the other photographs in the cartesian coordinate system are calculated as follows:
dis_lon = 102834.74258026089786013677476285; // distance per degree of longitude (meters)
dis_lat = 111712.69150641055729984301412873; // distance per degree of latitude (meters)
x = dis_lon * (M.lon - OG.lon);
y = dis_lat * (M.lat - OG.lat);
The "-" operator appearing in the above equation represents information of parameters preceding the operator, e.g., m.lon represents longitude at point M, m.lat represents latitude at point M, og.lon represents longitude at OG (i.e., O), and og.lat represents latitude at OG; the following formula is the same as the above formula.
From the coordinates M (x, y) in the cartesian coordinate system after M transformation, the trapezoidal area ABCD and the ray MT are constructed, where the positions of the respective vertices and end points T of the trapezoid are:
A:M.x+er*Cos(π/2+θ),M.y+er*Sin(π/2+θ)
B:M.x+er*Cos(3*π/2+θ),M.y+er*Sin(3*π/2+θ)
C:M.x+los*Cos(3*π/2+θ),M.y+los*Sin(3*π/2+θ)
D:M.x+los*Cos(π/2+θ),M.y+los*Sin(π/2+θ)
T:M.x+los*Cos(θ),M.y+los*Sin(θ)
the positional relationship between the trapezoidal area and the shooting direction is shown in fig. 4.
And 3.2, finding the grids covered by the trapezoidal area, and calculating the positioning weight of each grid.
For each grid divided in step 2, firstly, judging whether the central point R of one grid Gr is covered by the trapezoidal area ABCD, if so, calculating the grid positioning weight w, wherein the calculation method of the weight is as follows:
step 3.2.1 calculate the positioning weight w of the point R according to the normal distribution function F (x), as shown in fig. 3. The variable x of F (x) is the ratio of the distance from the point R to the point MT to the distance from its corresponding boundary point U (the point on the closest oblique side of the trapezoidal region) to the point MT on the trapezoidal region, as shown in fig. 4, the point S is the perpendicular point of the point R on the point MT, and the value of x, i.e., the length ratio of the perpendicular line segment RS to the perpendicular line segment US, is partially coincident with the two perpendicular line segments. The concrete formula is as follows: (Len (RS) represents the length of the line segment RS)
Since most people will put a photographic subject in the middle of a photograph, σ =0.5 can be set.
Step 3.2.2 finds all the grids covered by the trapezoidal area, and calculates the positioning weight of each grid according to the method in step 3.2.1, as shown in fig. 6.
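A Java sketch of the weight calculation of steps 3.2.1-3.2.2 under stated assumptions: the trapezoid is supplied as its four vertices in order, F is taken to be a Gaussian density with mean 0 and σ = 0.5 (the text only states that F is a normal distribution function with σ = 0.5), and U is found by intersecting the ray from S through R with the trapezoid edges; all names are illustrative.

// Positioning weight of one grid whose center is R, for a trapezoid given by its vertices and
// a viewing ray from M towards T; returns 0 if R is not covered by the trapezoid.
public final class GridWeight {
    static final double SIGMA = 0.5;

    public static double weight(double[] r, double[] m, double[] t, double[][] trapezoid) {
        if (!contains(trapezoid, r)) return 0.0;
        double[] s = footOfPerpendicular(r, m, t);           // S: projection of R onto line MT
        double lenRS = dist(r, s);
        double lenUS = distanceToBoundary(s, r, trapezoid);  // |US|: from S through R to the boundary
        double x = (lenUS == 0.0) ? 0.0 : lenRS / lenUS;
        return Math.exp(-x * x / (2 * SIGMA * SIGMA)) / (SIGMA * Math.sqrt(2 * Math.PI));
    }

    static double[] footOfPerpendicular(double[] p, double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double u = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy);
        return new double[] { a[0] + u * dx, a[1] + u * dy };
    }

    // Standard ray-casting point-in-polygon test.
    static boolean contains(double[][] poly, double[] p) {
        boolean inside = false;
        for (int i = 0, j = poly.length - 1; i < poly.length; j = i++) {
            if ((poly[i][1] > p[1]) != (poly[j][1] > p[1])
                    && p[0] < (poly[j][0] - poly[i][0]) * (p[1] - poly[i][1])
                              / (poly[j][1] - poly[i][1]) + poly[i][0]) {
                inside = !inside;
            }
        }
        return inside;
    }

    // Distance from S, along the direction S -> R, to the nearest polygon edge.
    static double distanceToBoundary(double[] s, double[] r, double[][] poly) {
        double dx = r[0] - s[0], dy = r[1] - s[1];
        double best = Double.POSITIVE_INFINITY;
        for (int i = 0, j = poly.length - 1; i < poly.length; j = i++) {
            double ex = poly[i][0] - poly[j][0], ey = poly[i][1] - poly[j][1];
            double denom = dx * ey - dy * ex;
            if (denom == 0.0) continue;                       // ray parallel to this edge
            double tRay  = ((poly[j][0] - s[0]) * ey - (poly[j][1] - s[1]) * ex) / denom;
            double tEdge = ((poly[j][0] - s[0]) * dy - (poly[j][1] - s[1]) * dx) / denom;
            if (tRay > 0 && tEdge >= 0 && tEdge <= 1) best = Math.min(best, tRay);
        }
        return Double.isInfinite(best) ? 0.0 : best * Math.hypot(dx, dy);
    }

    static double dist(double[] a, double[] b) { return Math.hypot(a[0] - b[0], a[1] - b[1]); }
}

The trapezoid and the ray are accepted as plain inputs so the same routine works regardless of how the vertices were constructed.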
And 4, for all the photos in the photo group, calculating the positioning weights of the grids covered by each trapezoidal area according to the method in step 3, accumulating them to obtain the accumulated positioning weight of each grid, determining an event-positioning grid set according to a set threshold after normalization, and then determining the geographic coordinates of the place where the event occurred.
Step 4.1, calculating grid positioning weights of all the photos in the photo group based on grid division:
According to the method in step 3, calculate the positioning weights w_i,j of all the photos in the photo group K = {k_1, k_2, …, k_n} over the grids G = {g_1, g_2, …, g_m} (see step 3.2.1 for w_i,j); w_i,j denotes the positioning weight of the j-th grid (j ≤ m) computed from the i-th photo (i ≤ n). Finally, from the n photos, the cumulative positioning weight of grid g_j is obtained as:
C(g_j) = w_1,j + w_2,j + … + w_n,j
step 4.2, normalize the positioning weights of the grids as follows:
C′(g_j) = C(g_j) / Max{C(g_1), C(g_2), …, C(g_m)}
where m is the number of grids.
and 4.3, obtaining the event-positioning grid set LE, i.e. the event location, according to the threshold th. If LE is too large, or the elements of LE are too dispersed, the value of th can be adjusted, as shown in FIG. 6; by default 0 < th < 1, and the best results are obtained with th greater than 0.9. The grid set LE is calculated as:
LE = {g_j | C′(g_j) ≥ th}
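A Java sketch of steps 4.1-4.3 under an interpretive assumption: the accumulated weights are normalized by the maximum accumulated weight, so that 0 ≤ C′(g_j) ≤ 1 and a threshold such as th > 0.9 is meaningful; the names are illustrative.

import java.util.ArrayList;
import java.util.List;

// Accumulate per-photo grid weights, normalize them, and select the event-positioning grid set LE.
public final class EventGridSelection {
    /** w[i][j] is the positioning weight of grid j computed from photo i; returns the indices of LE. */
    public static List<Integer> locateGrids(double[][] w, double th) {
        int n = w.length, m = w[0].length;
        double[] c = new double[m];
        for (int i = 0; i < n; i++)                  // C(g_j) = w_1,j + ... + w_n,j
            for (int j = 0; j < m; j++) c[j] += w[i][j];

        double max = 0.0;
        for (double v : c) max = Math.max(max, v);

        List<Integer> le = new ArrayList<>();        // LE = { g_j | C'(g_j) >= th }
        for (int j = 0; j < m; j++)
            if (max > 0 && c[j] / max >= th) le.add(j);
        return le;
    }
}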
step 4.4, calculating the center of the grid set LE, which is the event positioning point EP(x, y), with error radius EPr; the formulas are as follows:
the x coordinate of the event anchor point is:
EP.x = ( Σ_{le∈LE} le.x ) / |LE|
the y coordinate of the event anchor point is:
EP.y = ( Σ_{le∈LE} le.y ) / |LE|
the error radius is:
EPr = (1/2) * √( (Max(le.x) - Min(le.x))² + (Max(le.y) - Min(le.y))² )
In the above formulas, le denotes one grid in the grid set LE, le.x is the x coordinate of grid le, le.y is the y coordinate of grid le, and |LE| is the size of the grid set LE; Max(le.x) and Min(le.x) are the maximum and minimum x coordinates of the grids in LE, and Max(le.y) and Min(le.y) are the maximum and minimum y coordinates of the grids in LE;
and 4.5, transforming EP(x, y) back into GPS coordinates, which are the geographic coordinates of the event location, with EPr as the error radius.
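A Java sketch of steps 4.4-4.5 under stated assumptions: EP is the mean of the grid centers in LE, EPr is taken as half the diagonal of the bounding box of LE (one plausible reading of the Max/Min terms), and the inverse of the mapping in step 3.1 converts EP back to GPS coordinates; the names are illustrative.

// Compute the event point EP, its error radius EPr, and the back-transformed GPS coordinates.
public final class EventPoint {
    static final double DIS_LON = 102834.74258026089786013677476285; // meters per degree of longitude
    static final double DIS_LAT = 111712.69150641055729984301412873; // meters per degree of latitude

    /** le[k] = {x, y} of the k-th grid in LE; og = {lon, lat} of the Cartesian origin O. */
    public static double[] locate(double[][] le, double[] og) {
        double sx = 0, sy = 0;
        double minX = Double.MAX_VALUE, maxX = -Double.MAX_VALUE;
        double minY = Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
        for (double[] g : le) {
            sx += g[0]; sy += g[1];
            minX = Math.min(minX, g[0]); maxX = Math.max(maxX, g[0]);
            minY = Math.min(minY, g[1]); maxY = Math.max(maxY, g[1]);
        }
        double epX = sx / le.length, epY = sy / le.length;
        double epr = 0.5 * Math.hypot(maxX - minX, maxY - minY);
        double lon = og[0] + epX / DIS_LON;          // inverse of x = dis_lon * (M.lon - OG.lon)
        double lat = og[1] + epY / DIS_LAT;          // inverse of y = dis_lat * (M.lat - OG.lat)
        return new double[] { lon, lat, epr };       // geographic coordinates and error radius (meters)
    }
}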

Claims (7)

1. An event positioning method based on mobile phone photographing behavior is characterized by comprising the following steps:
step 1, taking an event as a center, taking a plurality of photos from different directions by utilizing mobile phone photographing software to form a photo group, and recording situation information when the photos are taken;
step 2, estimating an area where the event is located according to the shooting positions of the plurality of pictures, and dividing the area into grids;
step 3, constructing a trapezoidal area from the place of the mobile phone when the mobile phone is used for photographing along the photographing direction according to the situation information of each photo, and then calculating the positioning weight of each grid covered by the trapezoidal area;
and 4, for all the photos in the photo group, calculating the positioning weights of the grids covered by each trapezoidal area according to the method in step 3, accumulating them to obtain the accumulated positioning weight of each grid, determining an event-positioning grid set according to a set threshold after normalization, and further determining the geographic coordinates of the place where the event occurred.
2. The event positioning method based on mobile phone photographing behavior as claimed in claim 1, wherein the context information in step 1 comprises:
the photographing position coordinate triple pLoc: <lon, lat, er>, where lon is the longitude, lat is the latitude, and er is the GPS positioning error radius;
the reading triple of the 3D accelerometer at the moment of photographing, accValue: <accx, accy, accz>, where accx, accy and accz respectively represent the readings of the 3D accelerometer on the x, y and z axes of the mobile phone coordinate system;
the reading triple of the 3D magnetometer at the moment of photographing, megValue: <megx, megy, megz>, where megx, megy and megz respectively represent the readings of the 3D magnetometer on the x, y and z axes of the mobile phone coordinate system.
3. The event positioning method based on mobile phone photographing behavior as claimed in claim 1 or 2, wherein the process of establishing the trapezoidal area in step 3 comprises:
mapping the GPS coordinates of the photo shooting point M (lon, lat) into a temporarily established Cartesian coordinate system, and constructing a trapezoidal area ABCD and a ray MT, wherein the positions of each vertex and end point T of the trapezoid are as follows:
A:M.x+er*Cos(π/2+θ),M.y+er*Sin(π/2+θ)
B:M.x+er*Cos(3*π/2+θ),M.y+er*Sin(3*π/2+θ)
C:M.x+los*Cos(3*π/2+θ),M.y+los*Sin(3*π/2+θ)
D:M.x+los*Cos(π/2+θ),M.y+los*Sin(π/2+θ)
T:M.x+los*Cos(θ),M.y+los*Sin(θ)
in the above formula, m.x is the coordinate of the shooting point M on the x axis of the cartesian coordinate system, m.y is the coordinate of the shooting point M on the y axis of the cartesian coordinate system, θ is the included angle between the direction in which the camera is aligned when the mobile phone takes a picture and the x axis of the world coordinate system rotating counterclockwise, er is the geographical GPS positioning error radius, and los is the maximum visible distance.
4. The event positioning method based on mobile phone photographing behavior as defined in claim 3, wherein the area where the event is located in step 2 is divided into m grids, and the specific process of calculating the positioning weight of each grid in step 3 comprises:
for each divided grid, judging whether the center point of the grid is covered by the trapezoidal area ABCD, and if so, calculating the positioning weight w of the grid with the following formulas:
x = Len(RS) / Len(US)
w = F(x) = (1/(σ*√(2π))) * exp(-x²/(2*σ²))
In the above two formulas, point S is the foot of the perpendicular from the grid center point R to the ray MT, point U is the intersection of the ray SR and the trapezoid boundary, Len(RS) is the length of segment RS, Len(US) is the length of segment US, and σ = 0.5.
5. The event positioning method based on mobile phone photographing behavior as claimed in claim 4, wherein the step of calculating the accumulated positioning weight in step 4 comprises:
calculating, according to the method in step 3, the positioning weights w_i,j of all n photos in the photo group K over the divided grids, where w_i,j denotes the positioning weight of the j-th grid (j ≤ m) computed from the i-th photo (i ≤ n); then, using all the photos in the photo group, the cumulative positioning weight of grid g_j is obtained as:
C(g_j) = w_1,j + w_2,j + … + w_n,j
6. the event positioning method based on mobile phone photographing behavior as claimed in claim 5, wherein the process of determining the event positioning grid set in step 4 is as follows:
the calculation formula of the grid set LE is as follows:
LE = {g_j | C′(g_j) ≥ th}
In the above formula, th is a set threshold, 0 < th < 1; C′(g_j) is the normalized cumulative positioning weight, calculated as:
C′(g_j) = C(g_j) / Max{C(g_1), C(g_2), …, C(g_m)}
In the above formula, m is the number of grids.
7. The event positioning method based on mobile phone photographing behavior as claimed in claim 6, wherein the process of determining the geographic coordinates of event positioning in step 4 is:
calculating the center of the grid set LE, namely the event positioning point EP (x, y), wherein the error radius is EPr, and the formula is as follows:
the x coordinate of the event anchor point is:
EP.x = ( Σ_{le∈LE} le.x ) / |LE|
the y coordinate of the event anchor point is:
EP.y = ( Σ_{le∈LE} le.y ) / |LE|
the error radius EPr is:
EPr = (1/2) * √( (Max(le.x) - Min(le.x))² + (Max(le.y) - Min(le.y))² )
In the above formulas, le denotes one grid in the grid set LE, le.x is the x coordinate of grid le, le.y is the y coordinate of grid le, and |LE| is the size of the grid set LE; Max(le.x) and Min(le.x) are the maximum and minimum x coordinates of the grids in LE, and Max(le.y) and Min(le.y) are the maximum and minimum y coordinates of the grids in LE;
the EP (x, y) is back-transformed to GPS coordinates, i.e., geographical coordinates that are the event location.
CN201510394428.3A 2015-07-07 2015-07-07 Event positioning method based on mobile phone photographing behavior Active CN105045582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510394428.3A CN105045582B (en) 2015-07-07 2015-07-07 Event positioning method based on mobile phone photographing behavior

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510394428.3A CN105045582B (en) 2015-07-07 2015-07-07 Event positioning method based on mobile phone photographing behavior

Publications (2)

Publication Number Publication Date
CN105045582A CN105045582A (en) 2015-11-11
CN105045582B true CN105045582B (en) 2018-01-02

Family

ID=54452147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510394428.3A Active CN105045582B (en) 2015-07-07 2015-07-07 Event positioning method based on mobile phone photographing behavior

Country Status (1)

Country Link
CN (1) CN105045582B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105509716B (en) * 2015-11-26 2018-03-27 武大吉奥信息技术有限公司 A kind of geographical information collection method and device based on augmented reality
CN105890597B (en) * 2016-04-07 2019-01-01 浙江漫思网络科技有限公司 A kind of assisted location method based on image analysis
CN113188439B (en) * 2021-04-01 2022-08-12 深圳市磐锋精密技术有限公司 Internet-based automatic positioning method for mobile phone camera shooting

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102113006A (en) * 2008-06-20 2011-06-29 金汉俊 Control method of geographic information system and mobile terminal
CN102799890A (en) * 2011-05-24 2012-11-28 佳能株式会社 Image clustering method
CN103473926A (en) * 2013-09-11 2013-12-25 无锡加视诚智能科技有限公司 Gun-ball linkage road traffic parameter collection and rule breaking snapshooting system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8831352B2 (en) * 2011-04-04 2014-09-09 Microsoft Corporation Event determination from photos

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102113006A (en) * 2008-06-20 2011-06-29 金汉俊 Control method of geographic information system and mobile terminal
CN102799890A (en) * 2011-05-24 2012-11-28 佳能株式会社 Image clustering method
CN103473926A (en) * 2013-09-11 2013-12-25 无锡加视诚智能科技有限公司 Gun-ball linkage road traffic parameter collection and rule breaking snapshooting system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
If You See Something, Swipe Towards It: Crowdsourced Event Localization Using Smartphones; Robin Wentao Ouyang; UbiComp 2013; 2013-09-12; pp. 23-32 *

Also Published As

Publication number Publication date
CN105045582A (en) 2015-11-11

Similar Documents

Publication Publication Date Title
TWI556198B (en) Positioning and directing data analysis system and method thereof
CN112634370A (en) Unmanned aerial vehicle dotting method, device, equipment and storage medium
CN111982291A (en) Fire point positioning method, device and system based on unmanned aerial vehicle
CN105045582B (en) A kind of state event location method based on mobile phone photograph behavior
JP2016018463A (en) State change management system and state change management method
CN104284155A (en) Video image information labeling method and device
CN110335351A (en) Multi-modal AR processing method, device, system, equipment and readable storage medium storing program for executing
JP2019060754A (en) Cloud altitude and wind velocity measurement method using optical image
CN110720023B (en) Method and device for processing parameters of camera and image processing equipment
CN113034347B (en) Oblique photography image processing method, device, processing equipment and storage medium
CN110099206B (en) Robot-based photographing method, robot and computer-readable storage medium
CN111582296B (en) Remote sensing image comprehensive matching method and device, electronic equipment and storage medium
Abrams et al. Webcams in context: Web interfaces to create live 3D environments
CN107291717B (en) Method and device for determining position of interest point
CN112419739A (en) Vehicle positioning method and device and electronic equipment
US20150142310A1 (en) Self-position measuring terminal
CN104539927A (en) Distance determination method and equipment
JP6941212B2 (en) Image analysis system, image analysis method, and image analysis program
US20210409902A1 (en) Localization by using skyline data
US20230314171A1 (en) Mapping apparatus, tracker, mapping method, and program
CN113650783A (en) Fixed wing oblique photography cadastral mapping method, system and equipment
CN107703954B (en) Target position surveying method and device for unmanned aerial vehicle and unmanned aerial vehicle
US20130128040A1 (en) System and method for aligning cameras
JP6643858B2 (en) Projection system, projection method, and program
CN112559786B (en) Method and device for determining imaging time of optical remote sensing image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant