CN113188439A - Internet-based automatic positioning method for mobile phone camera shooting - Google Patents


Info

Publication number
CN113188439A
CN113188439A
Authority
CN
China
Prior art keywords: data, scene, value, height, axis
Legal status: Granted
Application number
CN202110358196.1A
Other languages
Chinese (zh)
Other versions
CN113188439B (en)
Inventor
何杰
欧文灏
鲁伟
李文科
曾风平
徐思通
Current Assignee
Shenzhen Panfeng Precision Technology Co Ltd
Original Assignee
Shenzhen Panfeng Precision Technology Co Ltd
Application filed by Shenzhen Panfeng Precision Technology Co Ltd
Priority to CN202110358196.1A
Publication of CN113188439A
Application granted
Publication of CN113188439B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates

Abstract

The invention discloses an internet-based automatic positioning method for mobile phone camera shooting. The system comprises a camera, an identification unit, an analysis unit, a judgment unit, a database, an orientation processing unit, a sending unit and an intelligent device, and the method proceeds in specific steps: in step one, the user shoots a video of the scene at a location through the camera, calibrates the shot video as scene image information, and transmits the scene image information to the identification unit; in step two, record information is stored in the database. The orientation processing unit obtains the definition (sharpness) data, the distance value corresponding to the definition, and the influence factor of distance on definition from the database, and performs an orientation processing operation on the definition data together with the mobile phone region position data, recorded azimuth data, scene position data, proportional value and recorded image data held by the judgment unit to obtain the mobile phone position data. This further refines the phone's position, determines the shooting location, and improves the accuracy of the positioning analysis.

Description

Internet-based automatic positioning method for mobile phone camera shooting
Technical Field
The invention relates to the technical field of automatic positioning of mobile phone camera shooting, in particular to an automatic positioning method for mobile phone camera shooting based on the Internet.
Background
Photography and video shooting are distinguished both by definition and by application, and belong to two different theoretical systems. By definition, photography refers to the process of recording images with specialized equipment, generally a mechanical camera or a digital camera.
At present, when people shoot with a mobile phone, the shooting location and shooting angle cannot be positioned automatically, so users must position themselves manually with auxiliary software while shooting, which consumes a great deal of time.
An internet-based automatic positioning method for mobile phone camera shooting is therefore provided.
Disclosure of Invention
The aim of the invention is to provide an internet-based automatic positioning method for mobile phone camera shooting. A user shoots a video of the scene at a location through a camera, calibrates the shot video as scene image information, and transmits the scene image information to an identification unit. The identification unit acquires record information from the database and performs an identification operation on it together with the scene image information, quickly identifying the scene shot by the camera from the related data stored in the database and matching against that data, which improves identification accuracy and working efficiency.
Through the analysis unit, an image analysis operation is performed on the undetermined image data, recorded azimuth data, recorded position data, scene height data, scene width data and scene image information to obtain a proportional value, the scene height data, the scene width data, an X-axis difference value and a Y-axis difference value, which are transmitted together to the judgment unit.
Through the judgment unit, a judgment operation is performed on the proportional value, the scene height data, the scene width data, the X-axis difference value and the Y-axis difference value; accurate analysis of these data yields the approximate region of the corresponding mobile phone, improving the accuracy and persuasiveness of the judgment.
In addition, the orientation processing unit obtains the definition data, the distance value corresponding to the definition, and the influence factor of distance on definition from the database, and performs an orientation processing operation on them together with the mobile phone region position data, recorded azimuth data, scene position data, proportional value and recorded image data held by the judgment unit to obtain the mobile phone position data. This further refines the phone's position, determines the shooting location, and improves the accuracy of the positioning analysis.
The purpose of the invention can be realized by the following technical scheme:
an automatic positioning method for mobile phone camera shooting based on the internet comprises a camera, an identification unit, an analysis unit, a judgment unit, a database, an orientation processing unit, a sending unit and intelligent equipment, and the method comprises the following specific steps:
the method comprises the following steps: a user shoots a scene video of a location through a camera, marks the shot scene video as scene image information, and transmits the scene image information to an identification unit;
step two: the database stores record information, the identification unit acquires the record information from the database, performs identification operation on the record information and the scene image information to obtain undetermined image data, record azimuth data, record position data, scene height data, scene width data and scene image information, and transmits the undetermined image data, the record azimuth data, the record position data, the scene height data, the scene width data and the scene image information to the analysis unit;
step three: through the arrangement of the analysis unit, image analysis operation is carried out on image data to be determined, recorded azimuth data, recorded position data, scene height data, scene width data and scene image information to obtain a proportional value, the scene height data, the scene width data, an X-axis difference value and a Y-axis difference value, and the proportional value, the scene height data, the scene width data, the X-axis difference value and the Y-axis difference value are transmitted to the judgment unit together;
step four: through the setting of the judging unit, judging operation is carried out on the comparison value, the scene height data, the scene width data, the X-axis difference value and the Y-axis difference value to obtain the position data of the mobile phone area, and the position data of the mobile phone area is transmitted to the intelligent equipment through the transmitting unit;
step five: the intelligent equipment receives the position data of the mobile phone area and carries out positioning display on the position data;
step six: the database also stores definition numerical values of the mobile phone, distance values corresponding to the definition and influence factors of the distance data on the definition, the definition data, the distance values corresponding to the definition and the influence factors of the distance data on the definition are obtained from the database through the orientation processing unit, and the orientation processing operation is carried out on the definition data, the recorded orientation data, the scene position data, the proportional value and the recorded image data in the judgment unit, so that the mobile phone position data is obtained and is transmitted to the intelligent equipment;
step seven: and the intelligent equipment receives and displays the position data of the mobile phone.
As a further improvement of the invention: the specific operation process of the identification operation comprises the following steps:
K1: acquire the record information; calibrate the scenery of each place in the record information as recorded image data, calibrate the viewing azimuth corresponding to the recorded image data as recorded azimuth data, calibrate the region position corresponding to the recorded azimuth data as recorded position data, calibrate the position of the scenery as scene position data, calibrate the height of the scenery as scene height data, and calibrate the width of the scenery as scene width data;
K2: acquire the scene image information, match it against the recorded image data, select the recorded image data with high matching similarity, calibrate it as undetermined image data, and extract the recorded azimuth data, recorded position data, scene height data and scene width data corresponding to the undetermined image data.
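The selection in K1-K2 can be sketched as follows. The patent does not specify the form of the recorded image data or a similarity metric, so the field names and the toy mean-absolute-difference score below are illustrative assumptions standing in for whatever matcher is actually used:

```python
from dataclasses import dataclass

@dataclass
class RecordEntry:
    """One piece of record information (K1); field names are assumed."""
    image_features: list   # recorded image data, as a feature vector
    azimuth: float         # recorded azimuth data (degrees)
    position: tuple        # recorded position data
    scene_position: tuple  # scene position data
    scene_height: float    # scene height data
    scene_width: float     # scene width data

def similarity(a, b):
    """Toy score in (0, 1]: 1.0 when the feature vectors are identical."""
    mean_abs_diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 / (1.0 + mean_abs_diff)

def identify(scene_features, records, threshold=0.8):
    """K2: keep records with high matching similarity; these become
    the 'undetermined image data' candidates."""
    return [r for r in records
            if similarity(scene_features, r.image_features) >= threshold]
```

Each surviving `RecordEntry` carries its recorded azimuth, position, height and width along with it, which is the extraction step at the end of K2.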
As a further improvement of the invention: the specific operation process of the image analysis operation is as follows:
H1: obtain the scene image information, establish a virtual spatial rectangular coordinate system, and calibrate each corner point of the scene in the scene image information within that coordinate system to obtain a number of corner coordinate points;
H2: take the scene image information in the virtual spatial rectangular coordinate system of H1 and flatten it, i.e. scan one face of the three-dimensional image in photograph form; extract the corresponding corner points and calibrate them as plane corner points; extract the X-axis and Y-axis coordinate values of the plane corner points separately and sort each from largest to smallest to obtain the maximum and minimum values on each axis; subtract the minimum X-axis coordinate from the maximum to obtain the X-axis difference value, and subtract the minimum Y-axis coordinate from the maximum to obtain the Y-axis difference value;
H3: obtain the scene height data and scene width data corresponding to the undetermined image data and compare them with the Y-axis and X-axis difference values respectively, specifically: height ratio = scene height data / Y-axis difference value, and width ratio = scene width data / X-axis difference value;
H4: extract the height ratio and width ratio from H3 and take their mean: sum the height ratio and the width ratio and divide by two to obtain the comparison mean;
H5: following the comparison-mean calculation of H2-H4, calculate the comparison mean corresponding to each piece of undetermined image data, average these comparison means to obtain the proportional value, and extract the proportional value, the scene height data, the scene width data, the X-axis difference value and the Y-axis difference value.
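A minimal numeric sketch of H2-H5, assuming the plane corner points are already available as (x, y) pairs (all names are illustrative, not the patent's):

```python
def axis_differences(plane_corners):
    """H2: max minus min of the X and Y coordinates of the plane corner points."""
    xs = [x for x, _ in plane_corners]
    ys = [y for _, y in plane_corners]
    return max(xs) - min(xs), max(ys) - min(ys)

def comparison_mean(scene_height, scene_width, x_diff, y_diff):
    """H3-H4: average of the height ratio and the width ratio."""
    height_ratio = scene_height / y_diff   # H3
    width_ratio = scene_width / x_diff     # H3
    return (height_ratio + width_ratio) / 2.0  # H4

def proportional_value(candidates, x_diff, y_diff):
    """H5: mean of the comparison means over every piece of undetermined
    image data; `candidates` holds (scene_height, scene_width) pairs."""
    means = [comparison_mean(h, w, x_diff, y_diff) for h, w in candidates]
    return sum(means) / len(means)
```

For corner points spanning 4 units in X and 2 in Y, a 10-by-20 scene gives a height ratio of 5 and a width ratio of 5, so its comparison mean is 5; averaging that with a second candidate's mean yields the proportional value.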
As a further improvement of the invention: the specific operation process of the judgment operation is as follows:
G1: obtain the proportional value, the X-axis difference value and the Y-axis difference value, and calculate their actual values as follows:
S1: substitute the proportional value and the X-axis difference value into: actual width value = proportional value × X-axis difference value;
S2: substitute the proportional value and the Y-axis difference value into: actual height value = proportional value × Y-axis difference value;
G2: extract the actual height value and actual width value from G1 and take differences against the scene height data and scene width data corresponding to each piece of undetermined image data: calibrate the difference between each scene height datum and the actual height value as the imaginary height difference, calibrate the difference between each scene width datum and the actual width value as the imaginary width difference, and average the imaginary height difference and imaginary width difference to obtain a number of virtual-real difference values;
G3: sort the virtual-real difference values from smallest to largest to obtain virtual-real sorted data, select the first (smallest) virtual-real difference value in the sorting, extract the corresponding scene position data, and calibrate it as the mobile phone region position data.
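The judgment operation G1-G3 reduces to picking the candidate whose recorded dimensions best match the back-projected actual size. A sketch under assumed data shapes (the dict keys are illustrative):

```python
def judge_region(prop_value, x_diff, y_diff, candidates):
    """G1-G3: candidates are dicts with 'scene_height', 'scene_width'
    and 'scene_position'. Returns the scene position whose virtual-real
    difference value is smallest, i.e. the mobile phone region position."""
    actual_width = prop_value * x_diff    # S1
    actual_height = prop_value * y_diff   # S2
    scored = []
    for c in candidates:
        h_diff = abs(c["scene_height"] - actual_height)  # imaginary height difference
        w_diff = abs(c["scene_width"] - actual_width)    # imaginary width difference
        scored.append(((h_diff + w_diff) / 2.0, c["scene_position"]))  # G2
    scored.sort(key=lambda s: s[0])       # G3: smallest first
    return scored[0][1]
```

With a proportional value of 2, an X-axis difference of 3 and a Y-axis difference of 5, the actual size is 6 by 10, so a candidate recorded as 6 wide and 10 high wins outright.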
As a further improvement of the invention: the specific operation process of the orientation processing operation is as follows:
E1: acquire the recorded image data, identify its definition, and calibrate the identified definition as the actual definition value;
E2: extract the actual definition value and take its difference with the stored definition value to obtain the definition difference value; substitute the definition difference value, the distance value corresponding to the definition, and the influence factor of distance on definition into the formula: total distance value = definition difference value + influence factor of distance on definition + distance value corresponding to the definition;
E3: extract the total distance value from E2 and substitute it with the proportional value into: image distance = total distance value × proportional value, where the image distance represents the distance between the image shot by the mobile phone and the scenery;
E4: establish a plane rectangular coordinate system, calibrate the scene position data within it, and, according to the recorded azimuth data, the scene position data and the image distance, calibrate within that coordinate system the position from which the mobile phone shot, calibrating this position as the mobile phone position data;
E5: extract the mobile phone position data and convert it to actual size according to the scene position data, the image distance and the proportional value, i.e. calibrate the position in the mobile phone image on site and calibrate that point as the mobile phone position data.
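E2-E4 can be sketched as below. The additive total-distance formula follows the text as translated, and the azimuth convention (degrees measured counter-clockwise from the positive X axis) is an assumption, since the patent does not fix one:

```python
import math

def total_distance(actual_definition, stored_definition,
                   influence_factor, base_distance):
    """E2: total distance value = definition difference value
    + influence factor of distance on definition
    + distance value corresponding to the definition."""
    definition_diff = abs(actual_definition - stored_definition)
    return definition_diff + influence_factor + base_distance

def image_distance(total, prop_value):
    """E3: distance between the phone and the scene."""
    return total * prop_value

def phone_position(scene_position, azimuth_deg, distance):
    """E4: place the phone at `distance` from the scene position along
    the recorded azimuth in a plane rectangular coordinate system."""
    theta = math.radians(azimuth_deg)
    sx, sy = scene_position
    return (sx + distance * math.cos(theta),
            sy + distance * math.sin(theta))
```

For example, an actual definition of 9 against a stored value of 5, with an influence factor of 1 and a base distance of 2, gives a total distance value of 7; scaled by a proportional value of 2, the image distance is 14.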
The invention has the beneficial effects that:
(1) A user shoots a video of the scene at a location through a camera, calibrates the shot video as scene image information, and transmits the scene image information to the identification unit; the identification unit acquires the record information from the database and performs an identification operation on it together with the scene image information, quickly identifying the scene shot by the camera from the related data stored in the database and matching against that data, which improves identification accuracy and working efficiency.
(2) Through the analysis unit, an image analysis operation is performed on the undetermined image data, recorded azimuth data, recorded position data, scene height data, scene width data and scene image information to obtain a proportional value, the scene height data, the scene width data, an X-axis difference value and a Y-axis difference value, which are transmitted together to the judgment unit; through the judgment unit, a judgment operation is performed on the proportional value, the scene height data, the scene width data, the X-axis difference value and the Y-axis difference value to obtain the mobile phone region position data, which is transmitted to the intelligent device through the sending unit; accurate analysis of these data yields the approximate region of the corresponding mobile phone, improving the accuracy and persuasiveness of the judgment.
(3) The orientation processing unit obtains the definition data, the distance value corresponding to the definition and the influence factor of distance on definition from the database, and performs an orientation processing operation on them together with the mobile phone region position data, recorded azimuth data, scene position data, proportional value and recorded image data held by the judgment unit to obtain the mobile phone position data, further refining the phone's position, determining the shooting location, and improving the accuracy of the positioning analysis.
Drawings
The invention will be further described below with reference to the accompanying drawings.
FIG. 1 is a system block diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1, the present invention is an internet-based automatic positioning method for mobile phone camera shooting, comprising a camera, an identification unit, an analysis unit, a judgment unit, a database, an orientation processing unit, a sending unit and an intelligent device; the method includes the following steps:
Step one: a user shoots a video of the scene at a location through the camera, calibrates the shot video as scene image information, and transmits the scene image information to the identification unit;
Step two: the database stores record information; the identification unit acquires the record information from the database and performs an identification operation on it together with the scene image information. The specific process of the identification operation is:
K1: acquire the record information; calibrate the scenery of each place in the record information as recorded image data, calibrate the viewing azimuth corresponding to the recorded image data as recorded azimuth data, calibrate the region position corresponding to the recorded azimuth data as recorded position data, calibrate the position of the scenery as scene position data, calibrate the height of the scenery as scene height data, and calibrate the width of the scenery as scene width data;
K2: acquire the scene image information, match it against the recorded image data, select the recorded image data with high matching similarity, calibrate it as undetermined image data, and extract the recorded azimuth data, recorded position data, scene height data and scene width data corresponding to the undetermined image data;
K3: transmit the undetermined image data extracted in K2 and its corresponding recorded azimuth data, recorded position data, scene height data and scene width data, together with the scene image information, to the analysis unit;
Step three: through the analysis unit, an image analysis operation is performed on the undetermined image data, recorded azimuth data, recorded position data, scene height data, scene width data and scene image information. The specific process of the image analysis operation is:
H1: obtain the scene image information, establish a virtual spatial rectangular coordinate system, and calibrate each corner point of the scene in the scene image information within that coordinate system to obtain a number of corner coordinate points;
H2: take the scene image information in the virtual spatial rectangular coordinate system of H1 and flatten it, i.e. scan one face of the three-dimensional image in photograph form; extract the corresponding corner points and calibrate them as plane corner points; extract the X-axis and Y-axis coordinate values of the plane corner points separately and sort each from largest to smallest to obtain the maximum and minimum values on each axis; subtract the minimum X-axis coordinate from the maximum to obtain the X-axis difference value, and subtract the minimum Y-axis coordinate from the maximum to obtain the Y-axis difference value;
H3: obtain the scene height data and scene width data corresponding to the undetermined image data and compare them with the Y-axis and X-axis difference values respectively, specifically: height ratio = scene height data / Y-axis difference value, and width ratio = scene width data / X-axis difference value;
H4: extract the height ratio and width ratio from H3 and take their mean: sum the height ratio and the width ratio and divide by two to obtain the comparison mean;
H5: following the comparison-mean calculation of H2-H4, calculate the comparison mean corresponding to each piece of undetermined image data, average these comparison means to obtain the proportional value, extract the proportional value, the scene height data, the scene width data, the X-axis difference value and the Y-axis difference value, and transmit them to the judgment unit;
Step four: through the judgment unit, a judgment operation is performed on the proportional value, the scene height data, the scene width data, the X-axis difference value and the Y-axis difference value. The specific process of the judgment operation is:
G1: obtain the proportional value, the X-axis difference value and the Y-axis difference value, and calculate their actual values as follows:
S1: substitute the proportional value and the X-axis difference value into: actual width value = proportional value × X-axis difference value;
S2: substitute the proportional value and the Y-axis difference value into: actual height value = proportional value × Y-axis difference value;
G2: extract the actual height value and actual width value from G1 and take differences against the scene height data and scene width data corresponding to each piece of undetermined image data: calibrate the difference between each scene height datum and the actual height value as the imaginary height difference, calibrate the difference between each scene width datum and the actual width value as the imaginary width difference, and average the imaginary height difference and imaginary width difference to obtain a number of virtual-real difference values;
G3: sort the virtual-real difference values from smallest to largest to obtain virtual-real sorted data, select the first (smallest) virtual-real difference value in the sorting, extract the corresponding scene position data, and calibrate it as the mobile phone region position data;
G4: transmit the mobile phone region position data to the intelligent device through the sending unit;
Step five: the intelligent device receives the mobile phone region position data and displays its position;
Step six: the database also stores the definition values of the mobile phone, the distance values corresponding to the definition, and the influence factors of distance on definition; the orientation processing unit obtains the definition data, the corresponding distance value and the influence factor from the database and performs an orientation processing operation on them together with the mobile phone region position data, recorded azimuth data, scene position data, proportional value and recorded image data held by the judgment unit. The specific process of the orientation processing operation is:
E1: acquire the recorded image data, identify its definition, and calibrate the identified definition as the actual definition value;
E2: extract the actual definition value and take its difference with the stored definition value to obtain the definition difference value; substitute the definition difference value, the distance value corresponding to the definition, and the influence factor of distance on definition into the formula: total distance value = definition difference value + influence factor of distance on definition + distance value corresponding to the definition;
E3: extract the total distance value from E2 and substitute it with the proportional value into: image distance = total distance value × proportional value, where the image distance represents the distance between the image shot by the mobile phone and the scenery;
E4: establish a plane rectangular coordinate system, calibrate the scene position data within it, and, according to the recorded azimuth data, the scene position data and the image distance, calibrate within that coordinate system the position from which the mobile phone shot, calibrating this position as the mobile phone position data;
E5: extract the mobile phone position data and convert it to actual size according to the scene position data, the image distance and the proportional value, i.e. calibrate the position in the mobile phone image on site and calibrate that point as the mobile phone position data;
E6: extract the mobile phone position data and transmit it to the intelligent device;
Step seven: the intelligent device receives and displays the mobile phone position data.
In operation, a user shoots a video of the scene at a location through the camera, calibrates the shot video as scene image information, and transmits it to the identification unit. The database stores record information; the identification unit acquires the record information from the database, performs an identification operation on it together with the scene image information to obtain undetermined image data, recorded azimuth data, recorded position data, scene height data, scene width data and scene image information, and transmits these to the analysis unit. Through the judgment unit, a judgment operation is performed on the proportional value, the scene height data, the scene width data, the X-axis difference value and the Y-axis difference value to obtain the mobile phone region position data, which is transmitted to the intelligent device through the sending unit. The database also stores the definition values of the mobile phone, the distance values corresponding to the definition, and the influence factors of distance on definition; the orientation processing unit obtains these from the database and performs an orientation processing operation on them together with the mobile phone region position data, recorded azimuth data, scene position data, proportional value and recorded image data held by the judgment unit to obtain the mobile phone position data, which is transmitted to the intelligent device. The intelligent device receives and displays the mobile phone region position data and the mobile phone position data.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (4)

1. An automatic positioning method for mobile phone camera shooting based on the internet, characterized in that it employs a camera, a recognition unit, an analysis unit, a judgment unit, a database, a sending unit and an intelligent device, and comprises the following specific steps:
step one: the user shoots a scene video of a location through the camera, calibrates the shot scene video as scene image information, and transmits the scene image information to the recognition unit;
step two: the database stores record information; the recognition unit acquires the record information from the database, performs the recognition operation on it together with the scene image information to obtain undetermined image data, recorded azimuth data, recorded position data, scene height data, scene width data and scene image information, and transmits them to the analysis unit;
step three: the analysis unit performs the image analysis operation on the undetermined image data, recorded azimuth data, recorded position data, scene height data, scene width data and scene image information to obtain a proportional value, the scene height data, the scene width data, an X-axis difference and a Y-axis difference, and transmits them together to the judgment unit;
step four: the judgment unit performs the judgment operation on the proportional value, the scene height data, the scene width data, the X-axis difference and the Y-axis difference to obtain the mobile phone area position data, and transmits it to the intelligent device through the sending unit;
step five: the intelligent device receives the mobile phone area position data and displays the positioning result.
2. The automatic positioning method for mobile phone camera shooting based on the internet as claimed in claim 1, wherein the specific operation process of the recognition operation is as follows:
k1: acquire the record information; mark the scenery at each location in the record information as recorded image data; mark the viewing azimuth corresponding to the recorded image data as recorded azimuth data; mark the area position corresponding to the recorded azimuth data as recorded position data; mark the position of the scenery as scene position data; mark the height of the scenery as scene height data; and mark the width of the scenery as scene width data;
k2: acquire the scene image information, match it against the recorded image data, select the recorded image data with the highest matching similarity and calibrate it as the undetermined image data, then extract the recorded azimuth data, recorded position data, scene height data and scene width data corresponding to the undetermined image data.
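Steps K1-K2 amount to a best-match lookup over the stored records. The patent does not name a matching algorithm, so the sketch below uses a toy element-wise similarity function purely as a placeholder; the record fields mirror those listed in K1.

```python
# Sketch of the recognition operation (K1-K2). The similarity function and
# the toy image tuples are illustrative assumptions; the patent does not
# specify how matching similarity is computed.

def match_record(scene_image, records, similarity):
    """Return the stored record whose image best matches `scene_image`.

    Each record carries the fields named in K1: image, azimuth, position,
    scene_position, height, width. The best match becomes the
    "undetermined image data"; its other fields are extracted with it.
    """
    return max(records, key=lambda r: similarity(scene_image, r["image"]))

def similarity(a, b):
    # Toy metric: fraction of equal elements between two equal-length tuples.
    return sum(x == y for x, y in zip(a, b)) / len(a)

records = [
    {"image": (1, 0, 1, 1), "azimuth": "NE", "position": "block A",
     "scene_position": (10, 4), "height": 30.0, "width": 12.0},
    {"image": (1, 1, 1, 1), "azimuth": "SW", "position": "block B",
     "scene_position": (2, 8), "height": 45.0, "width": 20.0},
]
scene = (1, 1, 1, 0)
print(match_record(scene, records, similarity)["position"])  # → block B
```

In practice the "image" field would be a feature descriptor rather than raw pixels, but the selection logic (take the record with the highest similarity) is the same.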
3. The automatic positioning method for mobile phone camera shooting based on the internet as claimed in claim 1, wherein the specific operation process of the image analysis operation is as follows:
h1: acquire the scene image information, establish a virtual spatial rectangular coordinate system, and calibrate each corner point of the scene in the scene image information within this coordinate system to obtain a plurality of corner coordinate points;
h2: take the scene image information in the virtual spatial rectangular coordinate system of H1 and perform plane processing on it, i.e., scan one face of the three-dimensional image in photographic form and extract the corresponding corner points, calibrating them as plane corner points; separately extract the X-axis and Y-axis coordinate values of the plane corner points and sort each set from large to small to obtain the maximum and minimum values on each axis; subtract the minimum X-axis coordinate from the maximum to obtain the X-axis difference, and subtract the minimum Y-axis coordinate from the maximum to obtain the Y-axis difference;
h3: acquire the scene height data and scene width data corresponding to the undetermined image data and compare them with the Y-axis difference and X-axis difference respectively, specifically: height ratio = scene height data / Y-axis difference, and width ratio = scene width data / X-axis difference;
h4: extract the height ratio and width ratio from H3 and substitute them into the mean-value formula: comparison mean = (height ratio + width ratio) / 2;
h5: following the comparison-mean calculation of H2-H4, calculate the comparison mean corresponding to each undetermined image data, average these comparison means to obtain the proportional value, and extract the proportional value, the scene height data, the scene width data, the X-axis difference and the Y-axis difference.
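Steps H1-H5 reduce to span and ratio arithmetic over the corner coordinates. The sketch below follows those steps directly; the corner points and scene dimensions are assumed example inputs, since corner detection itself is outside the claim.

```python
# Sketch of the image analysis operation (H1-H5). Corner coordinates and
# scene dimensions are illustrative; only the arithmetic follows the claim.

def comparison_mean(corners, scene_height, scene_width):
    """Compute the H4 comparison mean plus the H2 axis differences."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    x_diff = max(xs) - min(xs)            # H2: X-axis difference
    y_diff = max(ys) - min(ys)            # H2: Y-axis difference
    height_ratio = scene_height / y_diff  # H3
    width_ratio = scene_width / x_diff    # H3
    return (height_ratio + width_ratio) / 2, x_diff, y_diff  # H4

# Plane corner points of one candidate scene (assumed values).
corners = [(0, 0), (4, 0), (4, 3), (0, 3)]
mean, x_diff, y_diff = comparison_mean(corners, scene_height=30.0, scene_width=20.0)
print(mean, x_diff, y_diff)  # → 7.5 4 3

# H5: average the comparison means of all undetermined image data
# to obtain the proportional value.
proportional_value = sum([mean]) / 1  # single candidate in this toy example
```

Note that H2's "sort from large to small" only serves to find the extremes, so `max`/`min` is an equivalent and clearer formulation.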
4. The automatic positioning method for mobile phone camera shooting based on the internet as claimed in claim 1, wherein the specific operation process of the judgment operation is as follows:
g1: acquire the proportional value, the X-axis difference and the Y-axis difference, and calculate their actual values as follows:
s1: substitute the proportional value and the X-axis difference into the calculation: actual width value = proportional value × X-axis difference;
s2: substitute the proportional value and the Y-axis difference into the calculation: actual height value = proportional value × Y-axis difference;
g2: extract the actual height value and actual width value from G1 and compute their differences against the scene height data and scene width data corresponding to each undetermined image data: calibrate the difference between each undetermined image data's scene height data and the actual height value as the imaginary height difference, and the difference between its scene width data and the actual width value as the imaginary width difference; average the imaginary height difference and imaginary width difference of each undetermined image data to obtain a plurality of virtual-real difference values;
g3: sort the virtual-real difference values from small to large to obtain the virtual-real ranking data, select the first (smallest) virtual-real difference value in the ranking, extract the corresponding scene position data, and calibrate it as the mobile phone area position data.
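Steps G1-G3 score each candidate record by how far its recorded dimensions deviate from the scaled-back "actual" dimensions, then pick the closest one. The sketch below implements exactly that selection; the candidate records and numeric inputs are assumed for illustration.

```python
# Sketch of the judgment operation (G1-G3). Candidate records and input
# values are illustrative assumptions.

def locate(proportional_value, x_diff, y_diff, candidates):
    """Pick the scene position of the candidate closest to the actual size."""
    actual_width = proportional_value * x_diff    # S1
    actual_height = proportional_value * y_diff   # S2
    # G2: per-candidate mean of |height error| and |width error|
    # (the "virtual-real difference"); G3: take the smallest.
    best = min(
        candidates,
        key=lambda c: (abs(c["height"] - actual_height)
                       + abs(c["width"] - actual_width)) / 2,
    )
    return best["scene_position"]

candidates = [
    {"height": 30.0, "width": 20.0, "scene_position": "district 1"},
    {"height": 28.0, "width": 41.0, "scene_position": "district 2"},
]
print(locate(7.5, x_diff=4, y_diff=3, candidates=candidates))  # → district 2
```

Sorting the whole list ascending and taking the head, as G3 literally states, is equivalent to the single `min` call used here.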
CN202110358196.1A 2021-04-01 2021-04-01 Internet-based automatic positioning method for mobile phone camera shooting Active CN113188439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110358196.1A CN113188439B (en) 2021-04-01 2021-04-01 Internet-based automatic positioning method for mobile phone camera shooting


Publications (2)

Publication Number Publication Date
CN113188439A true CN113188439A (en) 2021-07-30
CN113188439B CN113188439B (en) 2022-08-12

Family

ID=76974510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110358196.1A Active CN113188439B (en) 2021-04-01 2021-04-01 Internet-based automatic positioning method for mobile phone camera shooting

Country Status (1)

Country Link
CN (1) CN113188439B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002314994A (en) * 2001-04-13 2002-10-25 Matsushita Electric Ind Co Ltd System and method for estimating camera position
JP2004020398A (en) * 2002-06-18 2004-01-22 Nippon Telegr & Teleph Corp <Ntt> Method, device, and program for acquiring spatial information and recording medium recording program
CN104157020A (en) * 2014-08-12 2014-11-19 广州中国科学院沈阳自动化研究所分所 Hand-held intelligent inspection terminal
US20150206337A1 (en) * 2014-01-17 2015-07-23 Nokia Corporation Method and apparatus for visualization of geo-located media contents in 3d rendering applications
CN105045582A (en) * 2015-07-07 2015-11-11 西北工业大学 Mobile phone photographing behavior based event positioning method
CN106101689A (en) * 2016-06-13 2016-11-09 西安电子科技大学 Utilize the method that mobile phone monocular cam carries out augmented reality to virtual reality glasses
WO2018142533A1 (en) * 2017-02-02 2018-08-09 三菱電機株式会社 Position/orientation estimating device and position/orientation estimating method
CN109059895A (en) * 2018-03-28 2018-12-21 南京航空航天大学 A kind of multi-modal indoor ranging and localization method based on mobile phone camera and sensor
CN109520500A (en) * 2018-10-19 2019-03-26 南京航空航天大学 One kind is based on the matched accurate positioning of terminal shooting image and streetscape library acquisition method
CN109523592A (en) * 2018-10-19 2019-03-26 天津大学 A kind of interior flame localization method based on camera
CN110969792A (en) * 2019-11-04 2020-04-07 苏州再生宝智能物联科技有限公司 Intelligent anti-theft monitoring system based on Internet of things
CN110996003A (en) * 2019-12-16 2020-04-10 Tcl移动通信科技(宁波)有限公司 Photographing positioning method and device and mobile terminal
CN111814663A (en) * 2020-07-07 2020-10-23 朱强 Landform monitoring and management system based on Internet
CN111931556A (en) * 2020-06-15 2020-11-13 国网安徽省电力有限公司电力科学研究院 Power transmission line icing monitoring and management system
US20200374005A1 (en) * 2019-05-24 2020-11-26 Nanjing University Of Aeronautics And Astronautics Indoor visible light positioning method and system based on single led lamp
CN112135103A (en) * 2020-09-24 2020-12-25 徐莉 Unmanned aerial vehicle safety monitoring system and method based on big data
CN112163113A (en) * 2020-09-07 2021-01-01 淮南万泰电子股份有限公司 Real-time monitoring system for high-voltage combined frequency converter


Also Published As

Publication number Publication date
CN113188439B (en) 2022-08-12

Similar Documents

Publication Publication Date Title
CN113592989B (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
CN106935683B (en) A kind of positioning of solar battery sheet SPEED VISION and correction system and its method
CN111191625A (en) Object identification and positioning method based on laser-monocular vision fusion
CN109341668B (en) Multi-camera measuring method based on refraction projection model and light beam tracking method
CN102141398A (en) Monocular vision-based method for measuring positions and postures of multiple robots
CN110738703B (en) Positioning method and device, terminal and storage medium
CN109859269B (en) Shore-based video auxiliary positioning unmanned aerial vehicle large-range flow field measuring method and device
CN110634138A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN109949231B (en) Method and device for collecting and processing city management information
CN109920009B (en) Control point detection and management method and device based on two-dimensional code identification
WO2012133371A1 (en) Image capture position and image capture direction estimation device, image capture device, image capture position and image capture direction estimation method and program
CN109145929A (en) One kind being based on SIFT scale space characteristics information extraction method
Mi et al. A vision-based displacement measurement system for foundation pit
Wohlfeil et al. Automatic camera system calibration with a chessboard enabling full image coverage
CN113188439B (en) Internet-based automatic positioning method for mobile phone camera shooting
CN110956668A (en) Focusing stack imaging system preset position calibration method based on focusing measure
CN114152610B (en) Slide cell scanning method based on visual target mark
CN112507838B (en) Pointer meter identification method and device and electric power inspection robot
CN115082509A (en) Method for tracking non-feature target
CN114492070A (en) High-precision mapping geographic information virtual simulation technology and device
CN108592789A (en) A kind of steel construction factory pre-assembly method based on BIM and machine vision technique
CN104200469A (en) Data fusion method for vision intelligent numerical-control system
CN112613369A (en) Method and system for calculating area of building window
CN113971799A (en) Vehicle nameplate information position detection method and system
CN111121637A (en) Grating displacement detection method based on pixel coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant