CN106909562B - Street view image acquisition method, device and system - Google Patents


Info

Publication number
CN106909562B
CN106909562B (application CN201510979339.5A)
Authority
CN
China
Prior art keywords
observation
server
straight line
street view
azimuth angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510979339.5A
Other languages
Chinese (zh)
Other versions
CN106909562A (en)
Inventor
赵清伟
蔡云龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201510979339.5A priority Critical patent/CN106909562B/en
Publication of CN106909562A publication Critical patent/CN106909562A/en
Application granted granted Critical
Publication of CN106909562B publication Critical patent/CN106909562B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

The embodiments of the present invention provide a street view image acquisition method, device, and system, which at least solve the prior-art problem that street view images of some places are difficult to obtain. The method comprises the following steps: a server receives images captured and sent by different user equipments (UEs), together with the photographing position information and photographing azimuth angle information recorded when each UE captured the corresponding image; and the server stores the images captured by the different UEs, together with the corresponding photographing position information and photographing azimuth angle information, in an image library. The invention is applicable to the field of wireless communication.

Description

Street view image acquisition method, device and system
Technical Field
The invention relates to the field of wireless communication, in particular to a street view image acquisition method, device and system.
Background
When a user views a place through an electronic map, the user often also wants to view street view images of that place in order to learn more about it. For example, when viewing a tourist area, the user may also want to view images of its popular spots.
In the prior art, a street view image is a panoramic image obtained by a dedicated street view car taking 360-degree live-action shots at street view points spaced a certain distance apart along a street. The attribute parameters of a street view image therefore generally include: the geographical coordinate information of the street view point and the direction information of the street view image.
However, street view images of some places are difficult to capture with a street view car alone; for example, at scenic spots in remote areas, a street view car may be unable to reach a suitable shooting position.
Therefore, how to solve the problem that street view images of some places are difficult to obtain has become an urgent issue.
Disclosure of Invention
The embodiment of the invention provides a street view image acquisition method, device and system, which are used for at least solving the problem that street view images of some places are difficult to acquire when street view images are acquired in the prior art.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
in a first aspect, a method for obtaining a street view image is provided, where the method includes:
the server receives images captured and sent by different user equipments (UEs), together with the photographing position information and photographing azimuth angle information recorded when each UE captured the corresponding image;
and the server stores the images captured by the different UEs, together with the photographing position information and photographing azimuth angle information recorded when each UE captured the corresponding image, in an image library.
Unlike the prior art, street view images no longer need to be captured by a dedicated street view car, which made street view images of some places difficult to obtain. With this method, different UEs only need to send their captured images, together with the photographing position information and photographing azimuth angle information for each image, to the server. This improves the convenience and reliability of street view image acquisition; moreover, the richer image sources allow one place to be reflected from many different angles and at many different points in time.
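As a rough sketch of the two receive-and-store steps, the server-side image library might look like the following. The record field names (`ue_id`, `position`, `azimuth`) are illustrative assumptions, not terms taken from the patent:

```python
class ImageLibrary:
    """Stores images together with the photographing position and azimuth
    reported by the capturing UE (a minimal, hypothetical sketch)."""

    def __init__(self):
        self._records = []

    def store(self, ue_id, image, position, azimuth_deg):
        # position: (latitude, longitude); azimuth_deg: degrees clockwise
        # from north, normalized into [0, 360)
        self._records.append({
            "ue_id": ue_id,
            "image": image,
            "position": position,
            "azimuth": azimuth_deg % 360,
        })

    def __len__(self):
        return len(self._records)


# Two UEs upload images of the same place from different angles.
lib = ImageLibrary()
lib.store("UE-1", b"<jpeg bytes>", (39.9, 116.4), 45)
lib.store("UE-2", b"<jpeg bytes>", (39.9, 116.4), 315)
```

Each record keeps the image together with exactly the metadata the matching procedure described later needs.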
Optionally, the method may further include:
the server receives a request message sent by first UE, wherein the request message carries observation position information and observation azimuth information when a user observes street views;
the server determines, from the image library according to the observation position information and the observation azimuth angle information, a street view image sequence whose matching degree with the observation position information and the observation azimuth angle information satisfies a preset condition;
the server sends the street view image sequence to the first UE.
Therefore, the first UE can acquire the relevant street view image, so that the user can view the required street view image, and the user experience is improved.
Optionally, the determining, by the server from the image library according to the observation position information and the observation azimuth angle information, of a street view image sequence whose matching degree with the observation position information and the observation azimuth angle information satisfies a preset condition includes:
the server establishes a rectangular coordinate system by taking the observation position corresponding to the observation position information as an origin, taking the north direction as the positive Y-axis direction and taking the east direction as the positive X-axis direction;
the server projects an observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain a first straight line;
the server retrieves, from the image library, all images whose photographing position is less than a preset distance from the observation position;
the server respectively projects the photographing position and the photographing azimuth angle corresponding to each image to the rectangular coordinate system according to the photographing position information and the photographing azimuth angle information corresponding to each image in all the images to obtain a plurality of photographing position points and a plurality of second straight lines;
the server determines the intersection point of the first straight line with each of the plurality of second straight lines, and determines whether each intersection point is legal;
and the server determines the region of preset length containing the most legal intersection points as the focusing region, and determines the images corresponding to the second straight lines that intersect the first straight line within the focusing region as the street view image sequence.
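Assuming each legal intersection point is reduced to its distance from the origin along the first straight line, the focusing-region step just described can be sketched as a sliding window of the preset length (the function name and data representation are illustrative):

```python
def focusing_area(distances, preset_length):
    """Given the distances (along the first straight line) of the legal
    intersection points, return the (start, end) interval of the preset
    length that contains the most points, or None if there are no points.

    Uses a two-pointer sliding window over the sorted distances."""
    if not distances:
        return None
    pts = sorted(distances)
    best_start, best_count = pts[0], 0
    j = 0
    for i, start in enumerate(pts):
        # advance j while pts[j] still fits in [start, start + preset_length]
        while j < len(pts) and pts[j] <= start + preset_length:
            j += 1
        if j - i > best_count:
            best_count, best_start = j - i, start
    return (best_start, best_start + preset_length)
```

With points at distances 0, 1, 2, and 10 and a preset length of 3, the densest window is (0, 3), covering three of the four intersection points.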
Optionally, the projecting, by the server, the observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain a first straight line includes:
when 0° < A < 90°, the expression of the first straight line is determined to be Y = tan(90° − A)·X, where A denotes the observation azimuth angle;
when 90° < A < 180°, the expression of the first straight line is determined to be Y = tan(90° − A)·X;
when 180° < A < 270°, the expression of the first straight line is determined to be Y = tan(270° − A)·X;
when 270° < A < 360°, the expression of the first straight line is determined to be Y = tan(270° − A)·X;
when A equals 0° or 180°, the expression of the first straight line is determined to be X = 0;
when A equals 90° or 270°, the expression of the first straight line is determined to be Y = 0.
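The six cases can be collapsed into a small helper. Since tan has period 180°, both tan(90° − A) and tan(270° − A) equal cot A, so each expression simply gives the slope of the sight line in the X-east, Y-north system. The return format is an illustrative assumption:

```python
import math

def azimuth_to_line(a_deg):
    """Project an azimuth A (degrees clockwise from north) onto the
    rectangular coordinate system centred on the observation position
    (X axis east, Y axis north), following the six cases in the text.

    Returns ("x=0",) for the vertical line, otherwise ("y=kx", k)
    where k is the slope of the first straight line."""
    a = a_deg % 360
    if a in (0, 180):
        return ("x=0",)                      # due north or due south: X = 0
    if a in (90, 270):
        return ("y=kx", 0.0)                 # due east or due west: Y = 0
    if 0 < a < 180:
        k = math.tan(math.radians(90 - a))   # Y = tan(90° - A)·X
    else:
        k = math.tan(math.radians(270 - a))  # Y = tan(270° - A)·X
    return ("y=kx", k)
```

For example, an azimuth of 45° (northeast) yields slope 1, while 135° (southeast) yields slope −1, as expected.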
Optionally, the determining, by the server, whether the intersection is legal includes:
the server determines whether the intersection point is legal or not according to a preset rule, wherein the preset rule comprises the following steps:
when 0° < the observation azimuth angle < 90°, the intersection point should lie in the first quadrant {(x, y) | x > 0, y > 0};
when 90° < the observation azimuth angle < 180°, the intersection point should lie in the fourth quadrant {(x, y) | x > 0, y < 0};
when 180° < the observation azimuth angle < 270°, the intersection point should lie in the third quadrant {(x, y) | x < 0, y < 0};
when 270° < the observation azimuth angle < 360°, the intersection point should lie in the second quadrant {(x, y) | x < 0, y > 0};
when 0° < the azimuth angle of the image < 90°, the intersection point should lie in the first region {(x, y) | x > x1, y > y1};
when 90° < the azimuth angle of the image < 180°, the intersection point should lie in the second region {(x, y) | x > x1, y < y1};
when 180° < the azimuth angle of the image < 270°, the intersection point should lie in the third region {(x, y) | x < x1, y < y1};
when 270° < the azimuth angle of the image < 360°, the intersection point should lie in the fourth region {(x, y) | x < x1, y > y1};
where (x1, y1) denotes the photographing position point of the image. A legal intersection point lies in the overlap of the quadrant corresponding to the observation azimuth angle and the region corresponding to the azimuth angle of the image.
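A minimal sketch of the preset rule, assuming the observation point is at the origin and (x1, y1) is the photographing position point. The function and parameter names are hypothetical; intuitively, a legal intersection lies both in front of the observer and in front of the camera:

```python
def quadrant_ok(obs_az, x, y):
    """Check that (x, y) lies in the quadrant required by the
    observation azimuth angle (origin = observation position)."""
    if 0 < obs_az < 90:
        return x > 0 and y > 0    # first quadrant
    if 90 < obs_az < 180:
        return x > 0 and y < 0    # fourth quadrant
    if 180 < obs_az < 270:
        return x < 0 and y < 0    # third quadrant
    if 270 < obs_az < 360:
        return x < 0 and y > 0    # second quadrant
    return False

def region_ok(img_az, x, y, x1, y1):
    """Same test, but relative to the photographing position (x1, y1)."""
    if 0 < img_az < 90:
        return x > x1 and y > y1  # first region
    if 90 < img_az < 180:
        return x > x1 and y < y1  # second region
    if 180 < img_az < 270:
        return x < x1 and y < y1  # third region
    if 270 < img_az < 360:
        return x < x1 and y > y1  # fourth region
    return False

def is_legal(obs_az, img_az, x, y, x1, y1):
    # Legal iff the point is in the overlap of both areas.
    return quadrant_ok(obs_az, x, y) and region_ok(img_az, x, y, x1, y1)
```

For example, an observer looking northeast (45°) and a camera at (2, 0) facing northwest (315°) legally intersect at (1, 1), whereas a point behind the observer is rejected.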
Optionally, before the server sends the street view image sequence to the first UE, the method further includes:
the server sorts the street view image sequence clockwise or anticlockwise by photographing azimuth angle, taking the observation azimuth angle as the starting point;
the server sends the street view image sequence to the first UE, including:
and the server sends the sorted street view image sequence to the first UE.
In this way, when the first UE displays the street view image sequence, the sequence is displayed sorted clockwise or anticlockwise by photographing azimuth angle starting from the observation azimuth angle, which matches the user's viewing habit and further improves the user experience.
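The sorting step could be sketched as follows, assuming each entry pairs an image with its photographing azimuth in degrees clockwise from north (the pair layout and function name are illustrative):

```python
def sort_by_azimuth(images, obs_az, clockwise=True):
    """Sort (image, photographing_azimuth) pairs starting from the
    observation azimuth, going clockwise or anticlockwise."""
    def key(item):
        _, az = item
        # angular offset from the observation azimuth, in [0, 360)
        delta = (az - obs_az) % 360
        return delta if clockwise else (360 - delta) % 360
    return sorted(images, key=key)


# Observer faces north (0°); images were shot at 350°, 10°, and 180°.
imgs = [("a", 350), ("b", 10), ("c", 180)]
clockwise_order = sort_by_azimuth(imgs, 0)
```

Going clockwise from north, the 10° image comes first and the 350° image last; anticlockwise reverses that order.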
In a second aspect, a method for obtaining a street view image is provided, the method comprising:
user Equipment (UE) sends a request message to a server, wherein the request message carries observation position information and observation azimuth information when a user observes street views;
and the UE receives a street view image sequence sent by the server, where the street view image sequence is a sequence determined by the server, from a pre-stored image library according to the observation position information and the observation azimuth angle information, whose matching degree with the observation position information and the observation azimuth angle information satisfies a preset condition; the pre-stored image library comprises images captured by different UEs, together with the photographing position information and photographing azimuth angle information recorded when each UE captured the corresponding image.
By the method, the UE can acquire the related street view images of one place at a plurality of different angles and time points, so that a user can view the needed street view images from a plurality of angles, and the user experience is improved.
Optionally, the receiving, by the UE, the street view image sequence sent by the server includes:
and the UE receives the sorted street view image sequence sent by the server, where the street view image sequence is sorted clockwise or anticlockwise by photographing azimuth angle, taking the observation azimuth angle as the starting point.
Therefore, when the UE displays the street view image sequence, the street view image sequence which takes the observation azimuth as a starting point and is sorted clockwise or anticlockwise according to the photographing azimuth is displayed, so that the observation habit of a user is met, and the user experience is further improved.
In a third aspect, a method for obtaining a street view image is provided, where the method includes:
user Equipment (UE) acquires a shot image, and shooting position information and shooting azimuth information when the UE shoots the image;
and the UE sends the image, the photographing position information and the photographing azimuth angle information to a server.
Unlike the prior art, street view images no longer need to be captured by a dedicated street view car, which made street view images of some places difficult to obtain. With this method, a UE only needs to send its captured image, together with the photographing position information and photographing azimuth angle information for that image, to the server. This improves the convenience and reliability of street view image acquisition; moreover, the richer image sources allow one place to be reflected from many different angles and at many different points in time.
In a fourth aspect, a server is provided, the server comprising: a receiving unit and a storage unit;
the receiving unit is configured to receive images captured and sent by different user equipments (UEs), together with the photographing position information and photographing azimuth angle information recorded when each UE captured the corresponding image;
the storage unit is configured to store the images captured by the different UEs, and the photographing position information and the photographing azimuth information when each UE captures the corresponding image, in an image library.
Optionally, the server further includes a processing unit and a sending unit;
the receiving unit is further configured to receive a request message sent by the first UE, where the request message carries observation position information and observation azimuth information when a user observes street views;
the processing unit is configured to determine, from the image library according to the observation position information and the observation azimuth angle information, a street view image sequence whose matching degree with the observation position information and the observation azimuth angle information satisfies a preset condition;
the sending unit is configured to send the street view image sequence to the first UE.
Optionally, the processing unit is specifically configured to:
establishing a rectangular coordinate system by taking the observation position corresponding to the observation position information as an origin, taking the north direction as the positive Y-axis direction and taking the east direction as the positive X-axis direction;
projecting an observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain a first straight line;
searching the image library for all images whose photographing position is less than the preset distance from the observation position;
respectively projecting the photographing position and the photographing azimuth angle corresponding to each image on the rectangular coordinate system according to the photographing position information and the photographing azimuth angle information corresponding to each image in all the images to obtain a plurality of photographing position points and a plurality of second straight lines;
respectively determining the intersection point of the first straight line and each second straight line in the plurality of second straight lines, and determining whether the intersection point is legal or not;
and determining the region of preset length containing the most legal intersection points as the focusing region, and determining the images corresponding to the second straight lines that intersect the first straight line within the focusing region as the street view image sequence.
Optionally, the processing unit is specifically configured to:
when 0° < A < 90°, the expression of the first straight line is determined to be Y = tan(90° − A)·X, where A denotes the observation azimuth angle;
when 90° < A < 180°, the expression of the first straight line is determined to be Y = tan(90° − A)·X;
when 180° < A < 270°, the expression of the first straight line is determined to be Y = tan(270° − A)·X;
when 270° < A < 360°, the expression of the first straight line is determined to be Y = tan(270° − A)·X;
when A equals 0° or 180°, the expression of the first straight line is determined to be X = 0;
when A equals 90° or 270°, the expression of the first straight line is determined to be Y = 0.
Optionally, the processing unit is specifically configured to:
determining whether the intersection point is legal or not according to a preset rule, wherein the preset rule comprises the following steps:
when 0° < the observation azimuth angle < 90°, the intersection point should lie in the first quadrant {(x, y) | x > 0, y > 0};
when 90° < the observation azimuth angle < 180°, the intersection point should lie in the fourth quadrant {(x, y) | x > 0, y < 0};
when 180° < the observation azimuth angle < 270°, the intersection point should lie in the third quadrant {(x, y) | x < 0, y < 0};
when 270° < the observation azimuth angle < 360°, the intersection point should lie in the second quadrant {(x, y) | x < 0, y > 0};
when 0° < the azimuth angle of the image < 90°, the intersection point should lie in the first region {(x, y) | x > x1, y > y1};
when 90° < the azimuth angle of the image < 180°, the intersection point should lie in the second region {(x, y) | x > x1, y < y1};
when 180° < the azimuth angle of the image < 270°, the intersection point should lie in the third region {(x, y) | x < x1, y < y1};
when 270° < the azimuth angle of the image < 360°, the intersection point should lie in the fourth region {(x, y) | x < x1, y > y1};
where (x1, y1) denotes the photographing position point of the image. A legal intersection point lies in the overlap of the quadrant corresponding to the observation azimuth angle and the region corresponding to the azimuth angle of the image.
Optionally, the processing unit is further configured to, before the sending unit sends the street view image sequence to the first UE, sort the street view image sequence clockwise or anticlockwise by photographing azimuth angle, taking the observation azimuth angle as the starting point;
the sending unit is specifically configured to:
and sending the sequenced street view image sequence to the first UE.
Since the server provided in the embodiment of the present invention may execute the method for obtaining a street view image in the first aspect or any implementation manner of the first aspect, the technical effect that can be obtained by the server may also refer to the method embodiment, and details are not described here.
In a fifth aspect, a UE is provided, the UE comprising: a transmitting unit and a receiving unit;
the sending unit is used for sending a request message to the server, wherein the request message carries observation position information and observation azimuth information when a user observes street views;
the receiving unit is used for receiving a street view image sequence sent by the server, wherein the street view image sequence is determined by the server from a pre-stored image library according to the observation position information and the observation azimuth information, and the matching degree of the street view image sequence with the observation position information and the observation azimuth information meets a preset condition, and the pre-stored image library comprises images shot by different UEs, and shooting position information and shooting azimuth information when each UE shoots a corresponding image.
Optionally, the receiving unit is specifically configured to:
and receiving a sequenced street view image sequence sent by the server, wherein the street view image sequence takes the observation azimuth as a starting point and is sequenced clockwise or anticlockwise according to a photographing azimuth.
Since the UE provided in the embodiment of the present invention may execute the method for obtaining a street view image in the second aspect or any optional implementation manner of the second aspect, the technical effect that can be obtained by the UE may also refer to the method embodiment, and details are not described here.
In a sixth aspect, a UE is provided, the UE comprising: a processing unit and a transmitting unit;
the processing unit is used for acquiring a shot image, and shooting position information and shooting azimuth information when the UE shoots the image;
the sending unit is used for sending the image, the photographing position information and the photographing azimuth angle information to a server.
Since the UE provided in the embodiment of the present invention may execute the method for obtaining a street view image according to the third aspect, the technical effect that can be obtained by the UE may also refer to the method embodiment, and details are not repeated here.
In a seventh aspect, a server is provided, including: a processor, a memory, a system bus, and a communication interface;
the memory is configured to store a computer-executable instruction, the processor is connected to the memory through the system bus, and when the server runs, the processor executes the computer-executable instruction stored in the memory, so that the server executes the street view image obtaining method described in the first aspect or any one of the possible implementation manners of the first aspect.
Since the server provided in the embodiment of the present invention may execute the method for obtaining a street view image in the first aspect or any implementation manner of the first aspect, the technical effect that can be obtained by the server may also refer to the method embodiment, and details are not described here.
In an eighth aspect, a UE is provided, including: a processor, a memory, a system bus, and a communication interface;
the memory is configured to store computer-executable instructions, and the processor is connected to the memory through the system bus; when the UE runs, the processor executes the computer-executable instructions stored in the memory, so that the UE executes the street view image obtaining method described in the second aspect or any one of its possible implementation manners, or the street view image obtaining method described in the third aspect.
Since the UE provided in the embodiment of the present invention may use the method for obtaining a street view image described in any one of the possible implementation manners of the second aspect or the second aspect, or perform the method for obtaining a street view image described in the third aspect, the technical effect that can be obtained by the UE may also refer to the method embodiment, and details are not repeated here.
In a ninth aspect, a readable medium is provided, which includes computer-executable instructions; when a processor of a server executes the computer-executable instructions, the server executes the street view image obtaining method described in the first aspect or any one of its possible implementation manners.
In a tenth aspect, a readable medium is provided, which includes computer-executable instructions; when a processor of a user equipment (UE) executes the computer-executable instructions, the UE executes the street view image obtaining method described in the second aspect or any one of its possible implementation manners, or the street view image obtaining method described in the third aspect.
These and other aspects of the invention will be apparent from, and elucidated with reference to, the embodiments described hereinafter.
Drawings
Fig. 1 is a schematic diagram of an architecture of a street view image acquisition system according to an embodiment of the present invention;
fig. 2 is a first schematic flow chart of a street view image obtaining method according to an embodiment of the present invention;
fig. 3 is a second schematic flow chart of a street view image obtaining method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the intersection of the observation information and the image information provided by the embodiment of the present invention;
fig. 5 is a schematic flow chart of a third method for obtaining street view images according to an embodiment of the present invention;
fig. 6 is a first schematic structural diagram of a server according to an embodiment of the present invention;
fig. 7 is a second schematic structural diagram of a server according to an embodiment of the present invention;
fig. 8 is a first schematic structural diagram of a UE according to an embodiment of the present invention;
fig. 9 is a second schematic structural diagram of a UE according to an embodiment of the present invention;
fig. 10 is a third schematic structural diagram of a server according to an embodiment of the present invention;
fig. 11 is a third schematic structural diagram of a UE according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
It should be noted that, for the convenience of clearly describing the technical solutions of the embodiments of the present invention, in the embodiments of the present invention, words such as "first" and "second" are used to distinguish the same items or similar items with substantially the same functions and actions, and those skilled in the art can understand that the words such as "first" and "second" do not limit the quantity and execution order.
It should be noted that "/" in this context means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. "Plurality" means two or more.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
The wireless communication network in the present application is a network that provides a wireless communication function. The wireless communication network may employ different communication technologies, such as code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal frequency division multiple access (OFDMA), or single-carrier frequency division multiple access (SC-FDMA). Networks can be classified into 2G, 3G, or 4G networks according to factors such as capacity, rate, and delay. Typical 2G networks include global system for mobile communications (GSM) networks or general packet radio service (GPRS) networks; typical 3G networks include universal mobile telecommunications system (UMTS) networks; and typical 4G networks include long term evolution (LTE) networks. A UMTS network may also be referred to as a universal terrestrial radio access network (UTRAN), and an LTE network may also be referred to as an evolved universal terrestrial radio access network (E-UTRAN). According to the resource allocation mode, networks can be divided into cellular communication networks, which are mainly scheduling-based, and wireless local area networks (WLANs), which are mainly contention-based. The aforementioned 2G, 3G, and 4G networks are all cellular communication networks. It should be understood by those skilled in the art that, as technology advances, the technical solutions provided by the embodiments of the present invention may also be applied to other wireless communication networks, such as 4.5G or 5G networks, or other non-cellular communication networks. For simplicity, embodiments of the present invention may sometimes refer to a wireless communication network simply as a network.
A UE (user equipment) is a terminal device, which may be mobile or fixed, and is mainly used for receiving or sending service data. A user equipment may go by different names in different networks, such as: terminal, mobile station, subscriber unit, station, cellular telephone, personal digital assistant, wireless modem, wireless communication device, handheld device, laptop computer, cordless telephone, or wireless local loop station. The user equipment may communicate with one or more core networks via a radio access network (RAN), the access portion of the wireless communication network, for example, to exchange voice and/or data.
Further, various aspects, embodiments, or features will be presented in terms of systems that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, etc. and/or may not include all of the devices, components, modules etc. discussed in connection with the figures. Furthermore, a combination of these schemes may also be used.
In addition, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of these words is intended to present a concept in a concrete fashion.
In the embodiments of the present invention, "of", "relevant", and "corresponding" may sometimes be used interchangeably; it should be noted that their intended meanings are consistent when the distinction is not emphasized.
The network architecture and the service scenario described in the embodiments of the present invention are intended to illustrate the technical solutions of the embodiments more clearly, and do not limit the technical solutions provided therein. Those skilled in the art will appreciate that, as network architectures evolve and new service scenarios emerge, the technical solutions provided by the embodiments of the present invention remain applicable to similar technical problems.
A schematic structural diagram of a street view image acquisition system to which the embodiment of the present invention is applied is given below, as shown in fig. 1, including: a server 101, and a plurality of UEs (e.g., UE1, UE2, UE3, UE4, UE5, UE6, etc.) in communication with the server 101.
Based on the street view image acquisition system shown in fig. 1, the following description takes the interaction between the server 101 and any one UE (assumed to be UE1) as an example. The street view image acquisition method provided by the embodiment of the present invention, as shown in fig. 2, includes steps S201 to S204:
S201, the UE1 acquires a shot image, and the shooting position information and shooting azimuth angle information of the UE1 when the image was shot.
The UE1 may include a camera state monitoring module and an image information obtaining module, and when the camera performs a shooting operation, the camera state monitoring module may monitor the operation and further notify the image information obtaining module, so that the image information obtaining module may obtain a shot image, and shooting position information and shooting azimuth information of the UE1 when shooting the image.
S202, the UE1 transmits the photographed image, and photographing position information and photographing azimuth information when the UE1 photographs the image, to the server.
S203, the server receives the shot image sent by the UE1 and the shooting position information and the shooting azimuth angle information when the UE1 shoots the image.
S204, the server stores the shot image sent by the UE1, and the shooting position information and the shooting azimuth angle information when the UE1 shoots the image into an image library.
Specifically, in step S201 of the embodiment of the present invention:
The photographing location information of the UE1 when the image is photographed refers to the geographical coordinate information of the UE1 at the moment of shooting; for example, the geographical coordinate information may specifically be the longitude information and latitude information of the location where the UE1 photographed the image, which is not specifically limited in this embodiment of the present invention.
The shooting azimuth angle information of the UE1 when shooting the image refers to the horizontal angle between the shooting direction of the UE1 and the due north direction, the due north direction being the direction of the north pole of the earth.
It should be noted that the embodiment shown in fig. 2 is only described by taking an example of interaction between one UE (taking the UE1 as an example) and a server. In a specific implementation process, the above interaction may be performed between a plurality of different UEs and a server, and the present invention is not described in detail herein.
It should be noted that fig. 1 only illustrates 6 UEs, and certainly, the number of UEs communicating with the server 101 is not limited to 6, and may be 4, 8 or any number, and this is not limited in this embodiment of the present invention.
In the prior art, street view images need to be shot by a street view car, so street view images of some places are difficult to obtain. According to the street view image acquisition method provided by the embodiment of the invention, when street view images are acquired, different UEs only need to send the shot images, together with the shooting position information and shooting azimuth angle information when each UE shot the corresponding image, to the server. This improves the convenience and reliability of street view image acquisition and enriches the acquisition sources, so that the changes of one place at a plurality of different angles and time points can be reflected.
Optionally, based on the street view image obtaining method shown in fig. 2, after the server stores the captured images sent by the multiple UEs, and the photographing position information and the photographing azimuth information when each UE captures the images in the image library, as shown in fig. 3, the method may further include steps S301 to S305:
S301, the first UE sends a request message to the server, where the request message carries the observation position information and observation azimuth angle information when the user observes the street view.
S302, the server receives a request message sent by the first UE.
S303, the server determines a street view image sequence with matching degree meeting preset conditions with the observation position information and the observation azimuth information from an image library according to the observation position information and the observation azimuth information.
S304, the server sends the street view image sequence to the first UE.
S305, the first UE receives the street view image sequence sent by the server.
Specifically, in step S301 in the embodiment of the present invention:
the observation position information when the user observes the street view specifically refers to geographic coordinate information when the user observes the street view, for example, the geographical coordinate information may specifically be longitude information and latitude information of a position where the user observes the street view, and this is not specifically limited in the embodiment of the present invention.
The observation azimuth information when the user observes the street view specifically means a horizontal angle between an observation direction when the user observes the street view and a due north direction. The north direction is the direction of the north pole of the earth.
The first UE may be triggered to send the request message to the server after the user specifies the observation position and the observation azimuth on the electronic map, or may be triggered to send the request message to the server after the user inputs the observation position and the observation azimuth, which is not specifically limited in the embodiment of the present invention.
It should be noted that, in the embodiment of the present invention, the first UE may be any one UE shown in the system for acquiring a street view image shown in fig. 1, and the first UE may be a UE that transmits a captured image to a server, or may not transmit the captured image to the server, which is not limited in the embodiment of the present invention.
Specifically, in steps S303 to S305 in the embodiment of the present invention:
the image library comprises images shot by different UEs, and shooting position information and shooting azimuth information when each UE shoots a corresponding image. Therefore, after the server acquires the observation position information and the observation azimuth information of the user, the observation position information and the observation azimuth information can be matched with the image information in the image library to obtain image sequences of related street views at different angles, and the street view image sequences are fed back to the first UE. Therefore, the first UE can acquire the relevant street view images of one place at a plurality of different angles and time points, so that a user can view the required street view images from a plurality of angles, and the user experience is improved.
Optionally, in step S303 in the embodiment of the present invention:
the server determines, from the image library, a street view image sequence whose matching degree with the observation position information and the observation azimuth information satisfies a preset condition according to the observation position information and the observation azimuth information, which may specifically include the following steps S1-S6:
S1, the server establishes a rectangular coordinate system by taking the observation position corresponding to the observation position information as the origin, the due north direction as the positive Y-axis direction, and the due east direction as the positive X-axis direction.
S2, the server projects the observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain a first straight line.
And S3, the server searches all images shot at the shooting position which is less than the preset distance away from the observation position from the image library.
S4, the server respectively projects the photographing position and the photographing azimuth angle corresponding to each image to the rectangular coordinate system according to the photographing position information and the photographing azimuth angle information corresponding to each image in all the images to obtain a plurality of photographing position points and a plurality of second straight lines.
S5, the server determines the intersection point of the first straight line and each second straight line in the plurality of second straight lines respectively, and determines whether the intersection point is legal.
S6, the server determines the area containing the maximum legal intersection points in the preset length as a focus area, and determines the image corresponding to the second straight line intersected with the first straight line in the focus area as a streetscape image sequence.
The process of S1-S6 described above will be described in detail below based on the intersection diagram of the observation information and the image information shown in fig. 4.
Specifically, in step S1 in the embodiment of the present invention:
as shown in fig. 4, the server establishes a rectangular coordinate system with the observation position corresponding to the observation position information as the origin, the north direction as the positive Y-axis direction, and the east direction as the positive X-axis direction.
Specifically, in step S2 in the embodiment of the present invention:
the server projects the observation azimuth corresponding to the observation azimuth information onto the rectangular coordinate system to obtain a first straight line, such as the straight line 1 in fig. 4.
In a possible implementation manner, the projecting, by the server, the observation azimuth corresponding to the observation azimuth information onto the rectangular coordinate system to obtain the first straight line may specifically include:
When 0° < a < 90°, the straight line expression of the first straight line is determined to be Y = tan(90° − a)·X, where a represents the observation azimuth angle. For example, when a is 45°, Y = X. Or,
when 90° < a < 180°, the straight line expression of the first straight line is determined to be Y = tan(90° − a)·X. For example, when a is 135°, Y = −X. Or,
when 180° < a < 270°, the straight line expression of the first straight line is determined to be Y = tan(270° − a)·X. For example, when a is 225°, Y = X. Or,
when 270° < a < 360°, the straight line expression of the first straight line is determined to be Y = tan(270° − a)·X. For example, when a is 315°, Y = −X. Or,
when a is equal to 0° or 180°, the straight line expression of the first straight line is determined to be X = 0. Or,
when a is equal to 90° or 270°, the straight line expression of the first straight line is determined to be Y = 0.
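The six cases of step S2 collapse to a single formula, because tan has period 180°: tan(270° − a) = tan(90° − a). The following is a minimal Python sketch (not part of the patent; the function name and return convention are assumptions) that maps an azimuth a, measured in degrees clockwise from due north, to the expression of the first straight line:

```python
import math

def first_line_expression(azimuth_deg):
    """Map an observation azimuth a (degrees, clockwise from due north)
    to the first straight line of step S2 in the coordinate system of
    step S1 (north = +Y, east = +X).

    Returns ('vertical', None) for the line X = 0 (a = 0 or 180),
    otherwise ('slope', k) for the line Y = k * X."""
    a = azimuth_deg % 360
    if a in (0, 180):
        return ('vertical', None)          # X = 0
    if a in (90, 270):
        return ('slope', 0.0)              # Y = 0
    # The four remaining cases all reduce to k = tan(90 deg - a),
    # since tan has period 180 deg: tan(270 - a) = tan(90 - a).
    return ('slope', math.tan(math.radians(90 - a)))
```

For instance, an azimuth of 45° yields the slope 1 (Y = X) and 135° yields −1 (Y = −X), matching the examples in the text.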
Specifically, in step S3 in the embodiment of the present invention:
the server may look up images near the viewing position from a pre-stored image library, e.g., all images within a radius of 30 meters from the viewing position as the origin.
It should be noted that the preset distance may be an empirical value, or may be a preferred value obtained through a plurality of experiments, and this is not specifically limited in the embodiment of the present invention.
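As a hedged illustration of step S3 (the patent does not specify how the distance is computed), photographing positions stored as longitude/latitude can be filtered against the preset radius using a small-distance planar approximation; all names below are assumptions:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def within_radius(obs_lat, obs_lon, img_lat, img_lon, radius_m=30.0):
    """True if the photographing position lies within radius_m metres
    of the observation position, using an equirectangular approximation
    (adequate at the ~30 m street scale considered here)."""
    dy = math.radians(img_lat - obs_lat) * EARTH_RADIUS_M
    dx = (math.radians(img_lon - obs_lon)
          * EARTH_RADIUS_M * math.cos(math.radians(obs_lat)))
    return math.hypot(dx, dy) <= radius_m

def nearby_images(image_library, obs_lat, obs_lon, radius_m=30.0):
    """S3 sketch: keep only the images shot near the observation
    position. Each library record is assumed to carry 'lat'/'lon'."""
    return [img for img in image_library
            if within_radius(obs_lat, obs_lon, img['lat'], img['lon'], radius_m)]
```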
Specifically, in step S4 in the embodiment of the present invention:
the photographing position and the photographing azimuth angle corresponding to each image may be projected onto the rectangular coordinate system, respectively, to obtain a plurality of photographing position points, which may be, for example, a point a (x1, y1), a point B (x2, y2), and a point C (x3, y3) in fig. 4, and a plurality of second straight lines, which may be, for example, a straight line 2 at the point a, a straight line 3 at the point B, and a straight line 4 at the point C in fig. 4.
It should be noted that the corresponding relationship between the slopes of the straight lines 2, 3, and 4 and the photographing azimuth may be the same as the corresponding relationship between the slope of the first straight line and the observation azimuth when the observation azimuth is projected onto the rectangular coordinate system to obtain the first straight line, and the embodiment of the present invention is not described herein again.
For example, assuming that the photographing azimuth angle at the point B is 120°, the slope of the straight line 3 is tan(90° − a) = tan(−30°) = −√3/3 according to the above correspondence between azimuth angle and slope. Further, the straight line expression passing through the point B can be determined according to the point-slope formula.
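Once the slope kp of a second straight line through a photographing point (px, py) is known, its intersection with the first straight line Y = k1·X follows from the point-slope form. A minimal sketch (the function name and the coordinates used below are hypothetical, not from the patent):

```python
import math

def intersect_with_first_line(k1, px, py, kp):
    """Intersection of the first straight line Y = k1 * X (through the
    origin) with the second straight line of slope kp through the
    photographing point (px, py), written in point-slope form
    Y - py = kp * (X - px). Returns None for parallel lines."""
    if math.isclose(k1, kp):
        return None
    x = (py - kp * px) / (k1 - kp)
    return (x, k1 * x)
```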
Specifically, in step S5 in the embodiment of the present invention:
as can be seen from fig. 4, the straight line 2 intersects the straight line 1 at the intersection point 1, the straight line 3 intersects the straight line 1 at the intersection point 2, and the straight line 4 intersects the straight line 1 at the intersection point 3.
The server then determines whether the intersection 1, the intersection 2, and the intersection 3 are legal, respectively.
the server determines whether the intersection is legal, which may be specifically implemented as follows:
the server determines whether the intersection point is legal according to a preset rule, wherein the preset rule specifically includes:
When 0° < the observation azimuth angle < 90°, the intersection should fall in the first quadrant {(x, y) | x > 0, y > 0};
when 90° < the observation azimuth angle < 180°, in the fourth quadrant {(x, y) | x > 0, y < 0};
when 180° < the observation azimuth angle < 270°, in the third quadrant {(x, y) | x < 0, y < 0};
when 270° < the observation azimuth angle < 360°, in the second quadrant {(x, y) | x < 0, y > 0};
when 0° < the photographing azimuth angle of the image < 90°, in the first region {(x, y) | x > x1, y > y1};
when 90° < the photographing azimuth angle of the image < 180°, in the second region {(x, y) | x > x1, y < y1};
when 180° < the photographing azimuth angle of the image < 270°, in the third region {(x, y) | x < x1, y < y1};
when 270° < the photographing azimuth angle of the image < 360°, in the fourth region {(x, y) | x < x1, y > y1}.
A legal intersection point lies in the overlapping area of the quadrant corresponding to the observation azimuth angle and the region corresponding to the photographing azimuth angle of the image.
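Both rule sets above have the same shape: the intersection must lie in the open quadrant that the azimuth points into, taken relative to the origin for the observation azimuth and relative to the photographing point (x1, y1) for the image's azimuth. A hedged sketch (the rules do not cover axis-aligned azimuths of 0°, 90°, 180°, or 270°, so those are simply rejected here):

```python
def is_legal_intersection(x, y, x1, y1, obs_azimuth, photo_azimuth):
    """Apply the preset rules of S5: the intersection (x, y) must lie
    in the quadrant selected by the observation azimuth AND in the
    region, relative to the photographing point (x1, y1), selected by
    the image's photographing azimuth."""

    def points_into(dx, dy, azimuth_deg):
        # Which open quadrant (relative to the chosen reference point)
        # the azimuth direction points into.
        a = azimuth_deg % 360
        if 0 < a < 90:
            return dx > 0 and dy > 0
        if 90 < a < 180:
            return dx > 0 and dy < 0
        if 180 < a < 270:
            return dx < 0 and dy < 0
        if 270 < a < 360:
            return dx < 0 and dy > 0
        return False  # axis-aligned azimuth: outside the listed rules

    return (points_into(x, y, obs_azimuth)
            and points_into(x - x1, y - y1, photo_azimuth))
```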
That is, since the straight lines obtained by projecting the azimuth angle information of the observer and of each image extend both along the original azimuth direction and in the opposite direction, an intersection point may fall on the side opposite to the azimuth. Whether the intersection point is legal should therefore be determined according to the above rules.
Illustratively, taking the judgment of whether the intersection point 1 is legal as an example: as can be seen from fig. 4, the photographing azimuth angle of the image corresponding to the point A is between 180° and 270°, and the observation azimuth angle is between 0° and 90°. When 180° < the photographing azimuth angle < 270°, the intersection should fall in the third region {(x, y) | x < x1, y < y1}, i.e., the lower-left region bounded by the dotted line 1 and the dotted line 2 in fig. 4; when 0° < the observation azimuth angle < 90°, the intersection should fall in the first quadrant {(x, y) | x > 0, y > 0}, so the quadrant corresponding to the observation azimuth is the first quadrant. The overlapping region of this quadrant and this region is the rectangular region formed by the four points (0, y1), (x1, y1), (x1, 0), and (0, 0). Since the intersection 1 is not within this rectangular region, it can be determined that the intersection 1 is an illegal intersection.
In addition, as can be seen from fig. 4, the intersection 1 is located in the opposite direction of the observation azimuth, and therefore, it can also be confirmed that the intersection 1 is an illegal intersection.
Similarly, the intersection point 2 and the intersection point 3 can also be determined to be legal intersection points by the above method, and the embodiment of the present invention is not described in detail herein.
Specifically, in step S6 in the embodiment of the present invention:
The server determines the area containing the maximum number of legal intersection points within a preset length as the focus area, and determines the images corresponding to the second straight lines intersecting the first straight line in the focus area as the street view image sequence.
Illustratively, as shown in fig. 4, a line segment of suitable length (e.g., 3 meters) may be translated along the straight line represented by the observation azimuth angle, so that the segment covers as many legal intersections as possible; when the number of covered legal intersections is the largest, the area covered by the segment at that position is determined as the focus area. Assuming that the legal intersection points included in the focus area are the intersection point 2 and the intersection point 3, it can be determined that the image B corresponding to the straight line 3 on which the intersection point 2 lies and the image C corresponding to the straight line 4 on which the intersection point 3 lies form the street view image sequence.
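Since all legal intersections lie on the first straight line, the focus-area search of S6 reduces to a one-dimensional sliding window: sort the intersections by their position t along the line and find a segment of the preset length covering the most points. A minimal sketch under these assumptions (names and data layout are hypothetical):

```python
def focus_area_images(intersections, segment_len=3.0):
    """S6 as a 1-D sliding window: every legal intersection lies on the
    first straight line, so represent each as (t, image_id) where t is
    its signed distance along that line, and find the segment of length
    segment_len covering the most intersections. Returns the image ids
    covered by the best segment (the street view image sequence)."""
    pts = sorted(intersections)
    best = []
    for i, (t0, _) in enumerate(pts):
        # Candidate segment starting at the i-th intersection.
        covered = [img for t, img in pts[i:] if t - t0 <= segment_len]
        if len(covered) > len(best):
            best = covered
    return best
```

Starting each candidate segment at an intersection point is sufficient, because sliding the segment between points cannot increase the number it covers.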
Optionally, based on the embodiment shown in fig. 3, as shown in fig. 5, before the server sends the street view image sequence to the first UE (step S304), the method may further include:
S306, the server sorts the street view image sequence clockwise or counterclockwise by photographing azimuth angle, taking the observation azimuth angle as the starting point.
Further, the server sends the street view image sequence to the first UE (step S304), which may specifically include:
S304a, the server sends the sorted street view image sequence to the first UE.
The receiving, by the first UE, of the street view image sequence sent by the server (step S305) may specifically include:
S305a, the first UE receives the sorted street view image sequence sent by the server.
That is to say, in the embodiment of the present invention, after obtaining the street view image sequence, the server may further sort it clockwise or counterclockwise by photographing azimuth angle, taking the observation azimuth angle as the starting point, and then send the sorted sequence to the first UE. When the first UE displays the street view image sequence, it is therefore displayed in this sorted order, which conforms to the viewing habits of the user and further improves the user experience.
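The sorting of S306 can be expressed as ordering by the clockwise angular offset of each photographing azimuth from the observation azimuth (a sketch; the tuple layout is assumed, not specified by the patent):

```python
def sort_street_view_sequence(images, observation_azimuth, clockwise=True):
    """S306 sketch: order the sequence by photographing azimuth,
    starting at the observation azimuth and proceeding clockwise
    (or counterclockwise). images is a list of
    (photographing_azimuth_deg, image_id) tuples."""
    def angular_offset(item):
        # Clockwise offset in [0, 360) from the observation azimuth.
        delta = (item[0] - observation_azimuth) % 360
        return delta if clockwise else (360 - delta) % 360
    return sorted(images, key=angular_offset)
```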
As shown in fig. 6, the embodiment of the present invention provides a street view image acquiring apparatus, which may be a server 60, configured to perform the steps performed by the server in the street view image acquiring method shown in fig. 2, fig. 3, or fig. 5. The server 60 may include units corresponding to the respective steps, and for example, may include: a receiving unit 601 and a storage unit 602.
A receiving unit 601, configured to receive the photographed images sent by different UEs, and the photographing position information and photographing azimuth angle information when each UE photographed the corresponding image.
The storage unit 602 is configured to store the images captured by different UEs, and the photographing position information and the photographing azimuth information when each UE captures a corresponding image in the image library.
Optionally, as shown in fig. 7, the server 60 further includes a processing unit 603 and a sending unit 604.
The receiving unit 601 is further configured to receive a request message sent by the first UE, where the request message carries observation position information and observation azimuth information when the user observes street views.
And the processing unit 603 is configured to determine, according to the observation position information and the observation azimuth information, a street view image sequence whose matching degree with the observation position information and the observation azimuth information satisfies a preset condition from the image library.
A sending unit 604, configured to send the street view image sequence to the first UE.
Optionally, the processing unit 603 is specifically configured to:
and establishing a rectangular coordinate system by taking the observation position corresponding to the observation position information as an origin, taking the north direction as the positive Y-axis direction and the east direction as the positive X-axis direction.
And projecting the observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain a first straight line.
And searching all images shot at the shooting position which is away from the observation position by a distance less than a preset distance from the image library.
And respectively projecting the photographing position and the photographing azimuth angle corresponding to each image on the rectangular coordinate system according to the photographing position information and the photographing azimuth angle information corresponding to each image in all the images to obtain a plurality of photographing position points and a plurality of second straight lines.
And respectively determining the intersection point of the first straight line and each second straight line in the plurality of second straight lines, and determining whether the intersection point is legal or not.
And determining a region with the maximum legal intersection point in a preset length as a focusing region, and determining an image corresponding to a second straight line which is intersected with the first straight line in the focusing region as the street view image sequence.
Optionally, the processing unit 603 is specifically configured to:
when 0° < a < 90°, determining the straight line expression of the first straight line as Y = tan(90° − a)·X, a representing the observation azimuth angle; or,
when 90° < a < 180°, determining the straight line expression of the first straight line as Y = tan(90° − a)·X; or,
when 180° < a < 270°, determining the straight line expression of the first straight line as Y = tan(270° − a)·X; or,
when 270° < a < 360°, determining the straight line expression of the first straight line as Y = tan(270° − a)·X; or,
when a is equal to 0° or 180°, determining the straight line expression of the first straight line as X = 0; or,
when a is equal to 90° or 270°, determining the straight line expression of the first straight line as Y = 0.
Optionally, the processing unit 603 is specifically configured to:
determining whether the intersection point is legal or not according to a preset rule, wherein the preset rule comprises the following steps:
when 0° < the observation azimuth angle < 90°, the intersection should fall in the first quadrant {(x, y) | x > 0, y > 0}; or,
when 90° < the observation azimuth angle < 180°, in the fourth quadrant {(x, y) | x > 0, y < 0}; or,
when 180° < the observation azimuth angle < 270°, in the third quadrant {(x, y) | x < 0, y < 0}; or,
when 270° < the observation azimuth angle < 360°, in the second quadrant {(x, y) | x < 0, y > 0}; or,
when 0° < the photographing azimuth angle of the image < 90°, in the first region {(x, y) | x > x1, y > y1}; or,
when 90° < the photographing azimuth angle of the image < 180°, in the second region {(x, y) | x > x1, y < y1}; or,
when 180° < the photographing azimuth angle of the image < 270°, in the third region {(x, y) | x < x1, y < y1}; or,
when 270° < the photographing azimuth angle of the image < 360°, in the fourth region {(x, y) | x < x1, y > y1}.
A legal intersection point lies in the overlapping area of the quadrant corresponding to the observation azimuth angle and the region corresponding to the photographing azimuth angle of the image.
Optionally, the processing unit 603 is further configured to sort the street view image sequence clockwise or counterclockwise according to the photographing azimuth by using the observation azimuth as a starting point before the sending unit 604 sends the street view image sequence to the first UE.
The sending unit 604 is specifically configured to:
and sending the sequenced street view image sequence to the first UE.
It can be understood that the server 60 according to the embodiment of the present invention may correspond to the server in the method for acquiring a street view image shown in fig. 2, fig. 3, or fig. 5, and the division and/or the functions of the units in the server 60 according to the embodiment of the present invention are all for realizing the flow of the method for acquiring a street view image shown in fig. 2, fig. 3, or fig. 5, and therefore, for brevity, no further description is provided here.
Since the server 60 in the embodiment of the present invention may be configured to execute the method flow, reference may also be made to the method embodiment for obtaining technical effects, and details of the embodiment of the present invention are not repeated herein.
As shown in fig. 8, an embodiment of the present invention provides a street view image acquiring apparatus, which may be UE80, for performing the steps performed by UE1 in the above street view image acquiring method shown in fig. 2. The UE80 may include units corresponding to the respective steps, and may include, for example: a processing unit 801 and a transmitting unit 802.
A processing unit 801 configured to acquire a captured image, and photographing position information and photographing azimuth information when the UE80 captures the image;
a sending unit 802, configured to send the image, the photographing position information, and the photographing azimuth information to a server.
It is to be understood that the UE80 in the embodiment of the present invention may correspond to the UE1 in the method for acquiring a street view image shown in fig. 2, and the division and/or the functions of the units in the server in the embodiment of the present invention are all for implementing the flow of the method for acquiring a street view image shown in fig. 2, and are not described herein again for brevity.
Since the UE80 in the embodiment of the present invention may be configured to execute the above method process, reference may also be made to the above method embodiment for technical effects that can be obtained by the UE80, and the details of the embodiment of the present invention are not repeated herein.
As shown in fig. 9, an embodiment of the present invention provides a street view image acquiring apparatus, which may be UE90, configured to perform the steps performed by the first UE in the above street view image acquiring method shown in fig. 3 or fig. 5. The UE90 may include units corresponding to the respective steps, and may include a sending unit 901 and a receiving unit 902, for example.
A sending unit 901, configured to send a request message to a server, where the request message carries observation position information and observation azimuth information when a user observes street views.
A receiving unit 902, configured to receive a street view image sequence sent by the server, where the street view image sequence is determined by the server from a pre-stored image library according to the observation position information and the observation azimuth angle information, such that its matching degree with the observation position information and the observation azimuth angle information satisfies a preset condition; the pre-stored image library includes images shot by different UEs, and the shooting position information and shooting azimuth angle information when each UE shot the corresponding image.
Optionally, the receiving unit 902 is specifically configured to:
and receiving a sequenced street view image sequence sent by the server, wherein the street view image sequence takes the observation azimuth as a starting point and is sequenced clockwise or anticlockwise according to a photographing azimuth.
It is to be understood that the UE90 according to the embodiment of the present invention may correspond to the first UE in the street view image acquisition method shown in fig. 3 or fig. 5, and that the division and/or functions of the units in the UE90 according to the embodiment of the present invention are all for implementing the street view image acquisition method flow shown in fig. 3 or fig. 5; for brevity, they are not described here again.
Since the UE90 in the embodiment of the present invention may be configured to execute the above method process, reference may also be made to the above method embodiment for technical effects that can be obtained by the UE90, and the details of the embodiment of the present invention are not repeated herein.
As shown in fig. 10, an embodiment of the present invention provides an apparatus for acquiring a street view image, where the apparatus may be a server 100, and the apparatus includes: a processor 1001, a memory 1002, a bus 1003, and a communication interface 1004.
The memory 1002 is used for storing computer-executable instructions, and the processor 1001 is connected to the memory 1002 through the bus 1003. When the server 100 runs, the processor 1001 executes the computer-executable instructions stored in the memory 1002, so that the server 100 performs the steps performed by the server in the street view image acquisition method shown in fig. 2, fig. 3, or fig. 5. For the specific street view image acquisition method, reference may be made to the related descriptions in the embodiments shown in fig. 2, fig. 3, or fig. 5; details are not repeated here.
The processor 1001 in the embodiment of the present invention may be a Central Processing Unit (CPU), or may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a field-programmable gate array (FPGA), or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, the processor may also be a dedicated processor, which may include at least one of a baseband processing chip, a radio frequency processing chip, and the like. Further, the dedicated processor may also include a chip having other dedicated processing functions of the server 100.
The memory 1002 may include a volatile memory, such as a random-access memory (RAM); the memory 1002 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); in addition, the memory 1002 may also include a combination of the above types of memory.
The bus 1003 may include a data bus, a power bus, a control bus, a signal status bus, and the like. In this embodiment, for clarity of illustration, various buses are illustrated as bus 1003 in FIG. 10.
The communication interface 1004 may specifically be a transceiver on the server 100. The transceiver may be a wireless transceiver. For example, the wireless transceiver may be an antenna of the server 100 or the like. The processor 1001 transmits and receives data to and from other devices, for example, a UE, via the communication interface 1004.
In a specific implementation process, each step in the method flow shown in fig. 2, fig. 3, or fig. 5 may be implemented by the hardware processor 1001 executing the computer-executable instructions stored in software form in the memory 1002. To avoid repetition, details are not described here again.
Since the server 100 provided in the embodiment of the present invention can be used to execute the above method process, the technical effects obtained by the server can refer to the above method embodiment, and are not described herein again.
As shown in fig. 11, an embodiment of the present invention provides an apparatus for acquiring a street view image, where the apparatus may be a UE110, and the apparatus includes: a processor 1101, a memory 1102, a bus 1103, and a communication interface 1104.
The memory 1102 is configured to store computer-executable instructions, and the processor 1101 is connected to the memory 1102 through the bus 1103. When the UE110 runs, the processor 1101 executes the computer-executable instructions stored in the memory 1102, so that the UE110 performs the steps performed by the UE1 in the street view image acquisition method shown in fig. 2, or the steps performed by the first UE in the street view image acquisition method shown in fig. 3 or fig. 5. For the specific street view image acquisition method, reference may be made to the related descriptions in the embodiments shown in fig. 2, fig. 3, or fig. 5; details are not repeated here.
The processor 1101 in the embodiment of the present invention may be a CPU, or may also be other general processors, DSPs, ASICs, FPGAs, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, the processor may also be a dedicated processor, which may include at least one of a baseband processing chip, a radio frequency processing chip, and the like. Further, the dedicated processor may also include chips with other dedicated processing functions for UE 110.
The memory 1102 may include a volatile memory, such as a RAM; the memory 1102 may also include a non-volatile memory, such as a ROM, a flash memory, an HDD, or an SSD; in addition, the memory 1102 may also include a combination of the above types of memory.
The bus 1103 may include a data bus, a power bus, a control bus, a signal status bus, and the like. In this embodiment, for clarity of illustration, various buses are illustrated as bus 1103 in FIG. 11.
Communication interface 1104 may specifically be a transceiver on UE 110. The transceiver may be a wireless transceiver. For example, the wireless transceiver may be an antenna of the UE110, or the like. The processor 1101 transmits and receives data to and from other devices, such as a server, via the communication interface 1104.
In a specific implementation, each step in the method flow shown in fig. 2, fig. 3, or fig. 5 may be implemented by the hardware processor 1101 executing the computer-executable instructions stored in software form in the memory 1102. To avoid repetition, details are not described here again.
Since the UE110 provided in the embodiment of the present invention may be configured to execute the above method procedure, the technical effect obtained by the UE may refer to the above method embodiment, and will not be described herein again.
Optionally, this embodiment further provides a readable medium, which includes computer-executable instructions; when a processor of the server executes the computer-executable instructions, the server may perform the steps performed by the server in the street view image acquisition method shown in fig. 2, fig. 3, or fig. 5. For the specific street view image acquisition method, reference may be made to the descriptions in the embodiments shown in fig. 2, fig. 3, or fig. 5, and details are not repeated here.
Optionally, the present embodiment also provides a readable medium, which includes computer executable instructions, and when a processor of the UE executes the computer executable instructions, the UE may perform the steps performed by the UE1 in the street view image obtaining method shown in fig. 2 or perform the steps performed by the first UE in the street view image obtaining method shown in fig. 3 or fig. 5. For a specific street view image obtaining method, reference may be made to the above description in the embodiments shown in fig. 2, fig. 3, or fig. 5, and details thereof are not repeated here.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the above-described apparatus is only illustrated by the division of the above functional modules, and in practical applications, the above-described function distribution may be performed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to perform all or part of the above-described functions. For the specific working processes of the system, the apparatus, and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (14)

1. A street view image acquisition method is characterized by comprising the following steps:
the method comprises the steps that a server receives shot images sent by different User Equipment (UE), and shooting position information and shooting azimuth information when each UE shoots corresponding images;
the server stores the images shot by different UEs, and the shooting position information and the shooting azimuth angle information when each UE shoots the corresponding image into an image library;
the method further comprises the following steps:
the server receives a request message sent by first UE, wherein the request message carries observation position information and observation azimuth information when a user observes street views; the server determines a street view image sequence with matching degree meeting preset conditions with the observation position information and the observation azimuth information from the image library according to the observation position information and the observation azimuth information;
the server sends the street view image sequence to the first UE;
the determining, by the server from the image library according to the observation position information and the observation azimuth angle information, of the street view image sequence whose matching degree with the observation position information and the observation azimuth angle information satisfies the preset condition comprises:
the server establishes a rectangular coordinate system by taking the observation position corresponding to the observation position information as an origin, taking the north direction as the positive Y-axis direction and taking the east direction as the positive X-axis direction;
the server projects an observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain a first straight line;
the server searches all images shot at the shooting position which is less than a preset distance away from the observation position from the image library;
the server respectively projects the photographing position and the photographing azimuth angle corresponding to each image to the rectangular coordinate system according to the photographing position information and the photographing azimuth angle information corresponding to each image in all the images to obtain a plurality of photographing position points and a plurality of second straight lines;
the server respectively determines the intersection point of the first straight line and each second straight line in the plurality of second straight lines and determines whether the intersection point is legal or not;
and the server determines a region of a preset length that contains the largest number of legal intersection points as a focusing area, and determines the images corresponding to the second straight lines that intersect the first straight line within the focusing area as the street view image sequence.
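The focusing-area step above can be sketched as a sliding window along the first straight line: among all legal intersection points, find the segment of preset length covering the most of them. The representation below (intersections as distances along the first line paired with image identifiers) is an assumption for illustration only.

```python
def focus_area_images(legal_points, preset_length):
    """Pick the images whose second straight lines intersect the first line
    inside the densest window of `preset_length` along the first line.
    `legal_points` is a list of (distance_along_first_line, image_id) pairs
    for intersections already judged legal; this layout is illustrative."""
    pts = sorted(legal_points)                  # order by distance along the line
    best_ids = []
    for i, (start, _) in enumerate(pts):
        # candidate window [start, start + preset_length] along the first line
        ids = [img for d, img in pts[i:] if d <= start + preset_length]
        if len(ids) > len(best_ids):
            best_ids = ids
    return best_ids                              # the street view image sequence
```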
2. The method of claim 1, wherein the projecting, by the server, of the observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain the first straight line comprises:
when 0° < a < 90°, determining that the straight line expression of the first straight line is Y = tan(90° - a)·X, where a represents the observation azimuth angle;
when 90° < a < 180°, determining that the straight line expression of the first straight line is Y = tan(90° - a)·X;
when 180° < a < 270°, determining that the straight line expression of the first straight line is Y = tan(270° - a)·X;
when 270° < a < 360°, determining that the straight line expression of the first straight line is Y = tan(270° - a)·X;
when a is equal to 0° or 180°, determining that the straight line expression of the first straight line is X = 0;
when a is equal to 90° or 270°, determining that the straight line expression of the first straight line is Y = 0.
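In code, the piecewise cases above collapse to two slope formulas plus the two axis cases (tan(90° - a) and tan(270° - a) agree numerically, since tan has a period of 180°). A sketch, with the return encoding chosen for illustration:

```python
import math

def first_line(a):
    """Return the first straight line for observation azimuth a in degrees,
    measured clockwise from north, as ('X=0',), ('Y=0',), or ('Y=kX', k)
    meaning Y = k * X, following the cases in the claim."""
    a %= 360
    if a in (0, 180):
        return ('X=0',)                          # due north or south: the Y axis
    if a in (90, 270):
        return ('Y=0',)                          # due east or west: the X axis
    if a < 180:
        return ('Y=kX', math.tan(math.radians(90 - a)))
    return ('Y=kX', math.tan(math.radians(270 - a)))
```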
3. The method of claim 1 or 2, wherein the server determining whether the intersection point is legal comprises:
the server determines whether the intersection point is legal according to a preset rule, where the preset rule includes:
when 0° < the observation azimuth angle < 90°, the intersection point should be in the first quadrant {(x, y) | x > 0, y > 0};
when 90° < the observation azimuth angle < 180°, the intersection point should be in the fourth quadrant {(x, y) | x > 0, y < 0};
when 180° < the observation azimuth angle < 270°, the intersection point should be in the third quadrant {(x, y) | x < 0, y < 0};
when 270° < the observation azimuth angle < 360°, the intersection point should be in the second quadrant {(x, y) | x < 0, y > 0};
when 0° < the azimuth angle of the image < 90°, the intersection point should be in the first region {(x, y) | x > x1, y > y1}, where (x1, y1) is the photographing position point of the image;
when 90° < the azimuth angle of the image < 180°, the intersection point should be in the second region {(x, y) | x > x1, y < y1};
when 180° < the azimuth angle of the image < 270°, the intersection point should be in the third region {(x, y) | x < x1, y < y1};
when 270° < the azimuth angle of the image < 360°, the intersection point should be in the fourth region {(x, y) | x < x1, y > y1};
and a legal intersection point lies in the overlap of the quadrant corresponding to the observation azimuth angle and the region corresponding to the azimuth angle of the image.
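Both rule sets above are the same half-plane test applied at different origins: the observation point (0, 0) for the observation azimuth, and the photographing position point (x1, y1) for the image azimuth. A sketch follows; treating boundary azimuths of exactly 0°, 90°, 180°, or 270° as illegal is an assumption this claim leaves open.

```python
def intersection_is_legal(px, py, observation_azimuth, x1, y1, image_azimuth):
    """An intersection (px, py) is legal when it lies ahead of both the
    observer (the origin) along the observation azimuth and the
    photographing position point (x1, y1) along the image's azimuth."""
    def ahead(az, ox, oy):
        az %= 360
        if 0 < az < 90:    return px > ox and py > oy   # 1st quadrant / 1st region
        if 90 < az < 180:  return px > ox and py < oy   # 4th quadrant / 2nd region
        if 180 < az < 270: return px < ox and py < oy   # 3rd quadrant / 3rd region
        if 270 < az < 360: return px < ox and py > oy   # 2nd quadrant / 4th region
        return False                                    # boundary azimuths: assumed illegal
    return ahead(observation_azimuth, 0, 0) and ahead(image_azimuth, x1, y1)
```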
4. The method of any of claims 1-3, further comprising, prior to the server sending the street view image sequence to the first UE:
the server takes the observation azimuth as a starting point, and sequences the street view image sequence clockwise or anticlockwise according to a photographing azimuth;
the server sends the street view image sequence to the first UE, including:
and the server sends the sorted street view image sequence to the first UE.
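The clockwise ordering in claim 4 is a sort by angular offset from the observation azimuth, modulo 360°. A sketch, where the per-image dict layout is an assumption:

```python
def sort_clockwise(images, observation_azimuth):
    """Order street view images clockwise by photographing azimuth, starting
    from the observation azimuth; reverse the result for the anticlockwise
    order. `images` is a list of dicts with an 'azimuth' key in degrees."""
    return sorted(images,
                  key=lambda im: (im['azimuth'] - observation_azimuth) % 360)
```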
5. A street view image acquisition method is characterized by comprising the following steps:
user Equipment (UE) sends a request message to a server, wherein the request message carries observation position information and observation azimuth information when a user observes street views;
the UE receives a street view image sequence sent by the server, wherein the street view image sequence is determined by the server according to the observation position information and the observation azimuth angle information from a pre-stored image library, and the matching degree of the street view image sequence with the observation position information and the observation azimuth angle information meets a preset condition, wherein the pre-stored image library comprises images shot by different UEs, and shooting position information and shooting azimuth angle information when each UE shoots a corresponding image;
the determining, by the server from the pre-stored image library according to the observation position information and the observation azimuth angle information, of the street view image sequence whose matching degree with the observation position information and the observation azimuth angle information satisfies the preset condition includes:
the server establishes a rectangular coordinate system by taking the observation position corresponding to the observation position information as an origin, taking the north direction as the positive Y-axis direction and taking the east direction as the positive X-axis direction;
the server projects an observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain a first straight line;
the server searches all images shot at the shooting position which is less than a preset distance away from the observation position from the image library;
the server respectively projects the photographing position and the photographing azimuth angle corresponding to each image to the rectangular coordinate system according to the photographing position information and the photographing azimuth angle information corresponding to each image in all the images to obtain a plurality of photographing position points and a plurality of second straight lines;
the server respectively determines the intersection point of the first straight line and each second straight line in the plurality of second straight lines and determines whether the intersection point is legal or not;
and the server determines a region of a preset length that contains the largest number of legal intersection points as a focusing area, and determines the images corresponding to the second straight lines that intersect the first straight line within the focusing area as the street view image sequence.
6. The method of claim 5, wherein the UE receives the street view image sequence sent by the server, and wherein the receiving comprises:
and the UE receives the sequenced street view image sequence sent by the server, wherein the street view image sequence takes the observation azimuth as a starting point and is sequenced clockwise or anticlockwise according to a photographing azimuth.
7. A server, characterized in that the server comprises: a receiving unit and a storage unit;
the receiving unit is used for receiving the shot images sent by different User Equipment (UE) and the shooting position information and the shooting azimuth angle information when each UE shoots the corresponding image;
the storage unit is used for storing the images shot by different UEs, and the shooting position information and the shooting azimuth angle information when each UE shoots the corresponding image into an image library;
the server also comprises a processing unit and a sending unit;
the receiving unit is further configured to receive a request message sent by the first UE, where the request message carries observation position information and observation azimuth information when a user observes street views;
the processing unit is configured to determine, from the image library according to the observation position information and the observation azimuth angle information, a street view image sequence whose matching degree with the observation position information and the observation azimuth angle information satisfies a preset condition;
the sending unit is configured to send the street view image sequence to the first UE;
the processing unit is specifically configured to:
establishing a rectangular coordinate system by taking the observation position corresponding to the observation position information as an origin, taking the north direction as the positive Y-axis direction and taking the east direction as the positive X-axis direction;
projecting an observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain a first straight line;
searching all images shot at the shooting position which is less than the preset distance away from the observation position from the image library;
respectively projecting the photographing position and the photographing azimuth angle corresponding to each image on the rectangular coordinate system according to the photographing position information and the photographing azimuth angle information corresponding to each image in all the images to obtain a plurality of photographing position points and a plurality of second straight lines;
respectively determining the intersection point of the first straight line and each second straight line in the plurality of second straight lines, and determining whether the intersection point is legal or not;
and determining a region of a preset length that contains the largest number of legal intersection points as a focusing area, and determining the images corresponding to the second straight lines that intersect the first straight line within the focusing area as the street view image sequence.
8. The server according to claim 7, wherein the processing unit is specifically configured to:
when 0° < a < 90°, determining that the straight line expression of the first straight line is Y = tan(90° - a)·X, where a represents the observation azimuth angle;
when 90° < a < 180°, determining that the straight line expression of the first straight line is Y = tan(90° - a)·X;
when 180° < a < 270°, determining that the straight line expression of the first straight line is Y = tan(270° - a)·X;
when 270° < a < 360°, determining that the straight line expression of the first straight line is Y = tan(270° - a)·X;
when a is equal to 0° or 180°, determining that the straight line expression of the first straight line is X = 0;
when a is equal to 90° or 270°, determining that the straight line expression of the first straight line is Y = 0.
9. The server according to claim 7 or 8, wherein the processing unit is specifically configured to:
determining whether the intersection point is legal or not according to a preset rule, wherein the preset rule comprises the following steps:
when 0° < the observation azimuth angle < 90°, the intersection point should be in the first quadrant {(x, y) | x > 0, y > 0};
when 90° < the observation azimuth angle < 180°, the intersection point should be in the fourth quadrant {(x, y) | x > 0, y < 0};
when 180° < the observation azimuth angle < 270°, the intersection point should be in the third quadrant {(x, y) | x < 0, y < 0};
when 270° < the observation azimuth angle < 360°, the intersection point should be in the second quadrant {(x, y) | x < 0, y > 0};
when 0° < the azimuth angle of the image < 90°, the intersection point should be in the first region {(x, y) | x > x1, y > y1}, where (x1, y1) is the photographing position point of the image;
when 90° < the azimuth angle of the image < 180°, the intersection point should be in the second region {(x, y) | x > x1, y < y1};
when 180° < the azimuth angle of the image < 270°, the intersection point should be in the third region {(x, y) | x < x1, y < y1};
when 270° < the azimuth angle of the image < 360°, the intersection point should be in the fourth region {(x, y) | x < x1, y > y1};
and a legal intersection point lies in the overlap of the quadrant corresponding to the observation azimuth angle and the region corresponding to the azimuth angle of the image.
10. The server according to any one of claims 7-9,
the processing unit is further configured to sort the street view image sequence clockwise or counterclockwise according to a photographing azimuth by using the observation azimuth as a starting point before the sending unit sends the street view image sequence to the first UE;
the sending unit is specifically configured to:
and sending the sequenced street view image sequence to the first UE.
11. A User Equipment (UE), the UE comprising: a transmitting unit and a receiving unit;
the sending unit is used for sending a request message to the server, wherein the request message carries observation position information and observation azimuth information when a user observes street views;
the receiving unit is used for receiving a street view image sequence sent by the server, wherein the street view image sequence is determined by the server from a pre-stored image library according to the observation position information and the observation azimuth angle information, and the matching degree of the street view image sequence with the observation position information and the observation azimuth angle information meets a preset condition, wherein the pre-stored image library comprises images shot by different UEs, and shooting position information and shooting azimuth angle information when each UE shoots a corresponding image;
the determining, by the server from the pre-stored image library according to the observation position information and the observation azimuth angle information, of the street view image sequence whose matching degree with the observation position information and the observation azimuth angle information satisfies the preset condition includes:
the server establishes a rectangular coordinate system by taking the observation position corresponding to the observation position information as an origin, taking the north direction as the positive Y-axis direction and taking the east direction as the positive X-axis direction;
the server projects an observation azimuth angle corresponding to the observation azimuth angle information onto the rectangular coordinate system to obtain a first straight line;
the server searches all images shot at the shooting position which is less than a preset distance away from the observation position from the image library;
the server respectively projects the photographing position and the photographing azimuth angle corresponding to each image to the rectangular coordinate system according to the photographing position information and the photographing azimuth angle information corresponding to each image in all the images to obtain a plurality of photographing position points and a plurality of second straight lines;
the server respectively determines the intersection point of the first straight line and each second straight line in the plurality of second straight lines and determines whether the intersection point is legal or not;
and the server determines a region of a preset length that contains the largest number of legal intersection points as a focusing area, and determines the images corresponding to the second straight lines that intersect the first straight line within the focusing area as the street view image sequence.
12. The UE of claim 11, wherein the receiving unit is specifically configured to:
and receiving a sequenced street view image sequence sent by the server, wherein the street view image sequence takes the observation azimuth as a starting point and is sequenced clockwise or anticlockwise according to a photographing azimuth.
13. A server, comprising: a processor, a memory, a system bus, and a communication interface;
the memory is used for storing computer-executable instructions, the processor is connected with the memory through the system bus, and when the server runs, the processor executes the computer-executable instructions stored by the memory so as to enable the server to execute the street view image acquisition method according to any one of claims 1 to 4.
14. A User Equipment (UE), comprising: a processor, a memory, a system bus, and a communication interface;
the memory is used for storing computer-executable instructions, the processor is connected with the memory through the system bus, and when the UE runs, the processor executes the computer-executable instructions stored by the memory so as to enable the UE to execute the street view image acquisition method according to any one of claims 5-6.
CN201510979339.5A 2015-12-23 2015-12-23 Street view image acquisition method, device and system Active CN106909562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510979339.5A CN106909562B (en) 2015-12-23 2015-12-23 Street view image acquisition method, device and system

Publications (2)

Publication Number Publication Date
CN106909562A CN106909562A (en) 2017-06-30
CN106909562B (en) 2020-07-07

Family

ID=59200484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510979339.5A Active CN106909562B (en) 2015-12-23 2015-12-23 Street view image acquisition method, device and system

Country Status (1)

Country Link
CN (1) CN106909562B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069123B (en) * 2018-01-22 2022-02-18 腾讯科技(深圳)有限公司 Method and device for checking information point collection validity
CN113091757B (en) * 2019-12-23 2022-09-27 百度在线网络技术(北京)有限公司 Map generation method and device
CN112100418A (en) * 2020-09-11 2020-12-18 北京百度网讯科技有限公司 Method and device for inquiring historical street view, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955485A (en) * 2014-04-11 2014-07-30 王玉娇 Server, system and related method capable of realizing real-time electronic map
CN104050177A (en) * 2013-03-13 2014-09-17 腾讯科技(深圳)有限公司 Street view generation method and server
CN104199944A (en) * 2014-09-10 2014-12-10 重庆邮电大学 Method and device for achieving street view exhibition
CN104376007A (en) * 2013-08-14 2015-02-25 高德软件有限公司 POI (point of interest) street view image displaying method and device
CN104915432A (en) * 2015-06-18 2015-09-16 百度在线网络技术(北京)有限公司 Streetscape image acquisition method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8169414B2 (en) * 2008-07-12 2012-05-01 Lim Seung E Control of electronic games via finger angle using a high dimensional touchpad (HDTP) touch user interface
US9171527B2 (en) * 2012-11-20 2015-10-27 Google Inc. System and method for displaying geographic imagery
CN103150759B (en) * 2013-03-05 2015-11-25 腾讯科技(深圳)有限公司 A kind of method and apparatus street view image being carried out to Dynamic contrast enhance

Similar Documents

Publication Publication Date Title
US11751011B2 (en) Positioning method and user equipment
US10510193B2 (en) Method and system for geofencing of vehicle impound yards
US9582937B2 (en) Method, apparatus and computer program product for displaying an indication of an object within a current field of view
US9107037B2 (en) Determining points of interest using intelligent agents and semantic data
US10939240B2 (en) Location information processing method and apparatus, storage medium and processor
CN106909562B (en) Street view image acquisition method, device and system
US9384395B2 (en) Method for providing augmented reality, and user terminal and access point using the same
US20140286324A1 (en) Method and/or system for passive location estimation
EP2820868A1 (en) Positioning method and apparatus and computer program product
WO2012131151A1 (en) Methods and apparatuses for generating a panoramic image
EP3249883A1 (en) Information sharing method and apparatus
KR102341623B1 (en) Methods, devices and systems for performing drive test minimization measurements
CN109156025B (en) Uplink resource acquisition method, device and computer readable storage medium
WO2019084842A1 (en) Wireless communication method and device
KR100853379B1 (en) Method for transforming based position image file and service server thereof
RU2733280C1 (en) Method and apparatus for transmitting system information
WO2022237071A1 (en) Locating method and apparatus, and electronic device, storage medium and computer program
CN111954252B (en) Unauthorized unmanned aerial vehicle detection method, device and system
EP3627817B1 (en) Image processing method and terminal
CN109413409B (en) Data processing method, MEC server and terminal equipment
CN110674234B (en) Map data acquisition method, apparatus and storage medium
JP5521706B2 (en) Posting information sharing system, posting information sharing method, posting information sharing server, program, and storage medium
CN105653664A (en) Visual information processing method and system
CN110719574A (en) Network access method and related equipment
CN111371814A (en) Monitoring and processing method and device of electronic equipment and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant