WO2017054327A1 - Method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture - Google Patents

Method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture Download PDF

Info

Publication number
WO2017054327A1
WO2017054327A1 (PCT/CN2015/097585)
Authority
WO
WIPO (PCT)
Prior art keywords
superimposed
image
area
scene picture
information
Prior art date
Application number
PCT/CN2015/097585
Other languages
English (en)
French (fr)
Inventor
俞淑平
Original Assignee
百度在线网络技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 百度在线网络技术(北京)有限公司 filed Critical 百度在线网络技术(北京)有限公司
Priority to KR1020177021630A priority Critical patent/KR20170102517A/ko
Priority to EP15905238.0A priority patent/EP3242225B1/en
Priority to JP2017541282A priority patent/JP6644800B2/ja
Priority to US15/549,081 priority patent/US10380748B2/en
Publication of WO2017054327A1 publication Critical patent/WO2017054327A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/38 Outdoor scenes
    • G06V20/39 Urban scenes

Definitions

  • the present application relates to the field of computer technologies, specifically to the field of Internet technologies, and more particularly to a method and apparatus for determining a to-be-superimposed area of an image in a scene picture, superimposing an image, and presenting a picture.
  • the Street View image provides great convenience for users to view real geographic information.
  • the real street, building and other information in the street view allows users to know the real situation around their place of interest without leaving the house.
  • the image information in the street view picture is fixed information collected at a certain time. Therefore, before the street view is completely updated, the parts in the street view picture cannot be individually updated, added or modified.
  • the information contained in the Street View image that the user has obtained is stale and may not match the actual situation.
  • the purpose of the present application is to propose an improved method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture, so as to solve the technical problems mentioned in the Background section above.
  • the present application provides a method for determining an area to be superimposed of an image in a scene picture, including: acquiring a scene picture; determining an area to be superimposed in the scene picture; receiving an image to be superimposed based on an update request of a user, wherein the update request includes identity information of the image to be superimposed; and determining, based on the identity information, the area to be superimposed that matches the image to be superimposed.
  • determining the area to be superimposed in the scene picture comprises: dividing the scene picture into a plurality of candidate superimposed areas; taking a candidate superimposed area as the area to be superimposed based on the candidate superimposed area satisfying a predetermined condition; and adding identification information to the area to be superimposed.
  • the identification information of the area to be superimposed includes: geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information of the area to be superimposed; the identity information of the image to be superimposed includes: geographic location information of the image to be superimposed, size information of the image to be superimposed, and to-be-updated time information of the image to be superimposed.
  • the predetermined condition includes that the expected update frequency of the candidate overlay region is higher than the predetermined frequency.
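As a hedged illustration of how these identification/identity records and the matching step might look in code, here is a minimal Python sketch; the class names, field types, and the exact-match rule are assumptions for illustration, not part of the patent:

```python
# Hypothetical data model for the three fields named above; all names
# (geo, size, update_time) and the matching rule are illustrative.
from dataclasses import dataclass

@dataclass
class AreaInfo:                      # identification info of an area to be superimposed
    geo: tuple[float, float]         # latitude, longitude
    size: tuple[int, int]            # width, height in pixels
    update_time: str                 # time at which the area is due for update

@dataclass
class ImageInfo:                     # identity info of an image to be superimposed
    geo: tuple[float, float]
    size: tuple[int, int]
    update_time: str

def matches(area: AreaInfo, image: ImageInfo, geo_tol: float = 1e-4) -> bool:
    """An area matches an image when all three fields agree (within tolerance)."""
    return (abs(area.geo[0] - image.geo[0]) <= geo_tol
            and abs(area.geo[1] - image.geo[1]) <= geo_tol
            and area.size == image.size
            and area.update_time == image.update_time)
```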
  • the present application provides a method for superimposing an image in a scene picture, which includes: receiving identification information of an area to be superimposed in the scene picture, where the identification information of the area to be superimposed includes geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information; and uploading the image to be superimposed to a server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed, where the identity information of the image to be superimposed includes geographic location information of the image, size information of the image, and to-be-updated time information of the image.
  • the present disclosure provides a method for presenting a scene picture, including: receiving a scene picture acquisition request of a user, where the scene picture acquisition request includes geographic location information of a scene picture to be requested; acquiring a first scene picture that matches the scene picture acquisition request; based on the scene picture including an area to be superimposed, adding a matching image to be superimposed to the area to be superimposed to form a second scene picture; and presenting the second scene picture to the user.
  • the present application provides an apparatus for determining an area to be superimposed of an image in a scene picture, including: an obtaining module configured to acquire a scene picture; and a determining module configured to determine an area to be superimposed in the scene picture; a receiving module, configured to receive an image to be superimposed based on a user's update request, wherein the update request includes identity information of the image to be superimposed; and a matching module configured to determine an area to be superimposed that matches the image to be superimposed based on the identity information.
  • the determining module is further configured to: divide the scene picture into a plurality of candidate superimposed regions; take a candidate superimposed region as the region to be superimposed based on the candidate superimposed region satisfying the predetermined condition; and add identification information to the region to be superimposed.
  • the identification information of the area to be superimposed includes: geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information of the area to be superimposed; the identity information of the image to be superimposed includes: geographic location information of the image to be superimposed, size information of the image to be superimposed, and to-be-updated time information of the image to be superimposed.
  • the predetermined condition includes that the expected update frequency of the candidate overlay region is higher than the predetermined frequency.
  • the present application provides an apparatus for superimposing an image in a scene picture, including: a receiving module, configured to receive identification information of an area to be superimposed in a scene picture, where the identification information of the area to be superimposed includes geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information; and an uploading module, configured to upload the image to be superimposed to a server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed, where the identity information of the image to be superimposed includes geographic location information of the image, size information of the image, and to-be-updated time information of the image.
  • the present disclosure provides a device for presenting a scene picture, including: a receiving module, configured to receive a scene picture acquisition request of a user, where the scene picture acquisition request includes geographic location information of a scene picture to be requested; an obtaining module, configured to acquire a first scene picture that matches the scene picture acquisition request; an adding module, configured to add, based on the scene picture including an area to be superimposed, a matching image to be superimposed to the area to be superimposed to form a second scene picture; and a presentation module, configured to present the second scene picture to the user.
  • with the method for determining a to-be-superimposed area of an image, the method for superimposing an image, and the method for presenting a picture provided by the present application, by determining an area to be superimposed in the scene picture and adding a matching image to be superimposed to that area, the scene picture can be partially updated, or images can be superimposed in the scene picture, thereby improving the efficiency of scene picture updates and making the scene picture presented to the user better reflect reality.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is an example of an interaction process of determining an area to be superimposed of an image in a scene picture, superimposing an image, and presenting the scene picture with the superimposed image according to an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of a method for determining an area to be superimposed of an image in a scene picture according to an embodiment of the present application
  • FIG. 4 is a schematic flowchart of a method for superimposing an image in a scene picture according to an embodiment of the present application
  • FIG. 5 is a schematic flowchart of a method for presenting a scene picture according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an apparatus for determining an area to be superimposed of an image in a scene picture according to an embodiment of the present application
  • FIG. 7 is a schematic structural diagram of an apparatus for superimposing an image in a scene picture according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a device for presenting a scene picture according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server of an embodiment of the present application.
  • FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the present application may be applied.
  • the system architecture 100 may include a user 110 and terminal devices 111, 112, 113 corresponding to the user 110, a network 104, a server 105, a server 106, a user 120, and terminal devices 121, 122, 123 corresponding to the user 120.
  • the network 104 serves as the medium providing communication links among the terminal devices 111, 112, 113 of the user 110, the server 105, the server 106, and the terminal devices 121, 122, 123 of the user 120.
  • Network 104 may include various types of connections, such as wired, wireless communication links, fiber optic cables, and the like.
  • the user 110 can interact with the server 105 over the network 104 using the terminal devices 111, 112, 113 to receive or transmit messages and the like.
  • user 120 can interact with server 106 over network 104 using terminal devices 121, 122, 123 to receive or send messages and the like.
  • server 105 and server 106 can also interact with each other via network 104 to receive or send messages and the like.
  • Various communication client applications such as a web browser application, a street view map application, a search application, an instant communication tool, a mailbox client, and a social platform, may be installed on the terminal devices 111, 112, and 113 and the terminal devices 121, 122, and 123.
  • the terminal devices 111, 112, 113 and the terminal devices 121, 122, 123 may be various electronic devices having a display screen, including but not limited to smart phones, tablets, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and so on.
  • the server 105 and the server 106 may be servers that provide various services, for example, a street view map server that provides a street view map to the terminal devices 111, 112, 113 and/or the terminal devices 121, 122, 123, or an image processing server that adds an image to a relevant location in the street view map based on a request of the terminal devices 111, 112, 113 and/or the terminal devices 121, 122, 123.
  • the method for determining an area to be superimposed of an image in a scene picture provided by the embodiments of the present application may be performed by the server 105 or the server 106. Accordingly, the apparatus for determining an area to be superimposed of an image in a scene picture may be set in the server 105 or the server 106.
  • the method for superimposing an image in a scene picture provided by the embodiments of the present application may be performed by the terminal devices 111, 112, 113 and/or the terminal devices 121, 122, 123. Accordingly, the apparatus for superimposing an image in a scene picture may be set in the terminal devices 111, 112, 113 and/or the terminal devices 121, 122, 123.
  • the method for presenting a scene picture provided by the embodiments of the present application may be performed by the server 105 or the server 106. Accordingly, the apparatus for presenting a scene picture may be set in the server 105 or the server 106.
  • the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Depending on implementation needs, there may be any number of terminal devices, networks, and servers.
  • referring to FIG. 2, an example of an interaction process of determining an area to be superimposed of an image in a scene picture, superimposing an image, and presenting the scene picture with the superimposed image according to an embodiment of the present application is shown.
  • the image processing server acquires a scene picture from the street view map server.
  • each scene picture can be stored in the Street View Map Server.
  • each scene picture may have information characterizing its geographic location (e.g., city, street, house number, etc., or latitude and longitude).
  • the image processing server may determine an area to be superimposed in the scene picture it has acquired.
  • the image processing server may determine the area to be superimposed in the scene picture based on a predetermined rule. For example, in some alternative implementations, the image processing server may identify the street view objects (e.g., buildings, street lights, etc.) contained in the scene picture and treat the areas in which these objects are located as the areas to be superimposed.
  • in step 203, the second client acquires the identification information of the area to be superimposed.
  • the identification information may be information capable of determining the position of the area to be superimposed in one-to-one correspondence.
  • in step 204, the second client determines whether the area to be superimposed matches the image to be superimposed.
  • here, the word "match" may mean, for example, that the area to be superimposed is suitable for adding these images to be superimposed.
  • in step 205, if the area to be superimposed matches the image to be superimposed, the second client may send an update request to the image processing server.
  • in some alternative implementations, when the second client sends the update request, the image to be superimposed may also be sent to the image processing server along with it.
  • in step 206, the image processing server determines the area to be superimposed that matches the image to be superimposed.
  • in some application scenarios, as in step 204, whether the area to be superimposed matches the image to be superimposed may be determined by the second client.
  • alternatively, in other application scenarios, as in step 206, whether the area to be superimposed matches the image to be superimposed may be determined by the image processing server.
  • in still other application scenarios, the second client may first determine whether the area to be superimposed matches the image to be superimposed based on certain judgment criteria and/or conditions, and then the image processing server may further determine whether they match based on judgment criteria and/or conditions that are the same as or different from those of the second client.
  • in step 207, when the first client needs to acquire a scene picture, it may send a scene picture acquisition request to the street view map server.
  • in some application scenarios, the street view map server may store a plurality of different scene pictures, while the first client desires to obtain only one or a part of them. Therefore, in these application scenarios, the scene picture acquisition request sent by the first client may include related information of the scene pictures it desires to acquire. That is to say, after receiving the scene picture acquisition request sent by the first client, the street view map server may search the area storing all scene pictures (for example, a database for storing scene pictures) according to the request, and find the scene pictures the first client desires to obtain.
  • in step 208, the street view map server acquires the first scene picture corresponding to the scene picture acquisition request sent by the first client.
  • for example, in some application scenarios, the street view map server may retrieve and acquire the corresponding first scene picture from the database storing the scene pictures, based on the scene picture acquisition request sent by the first client.
  • in step 209, the street view map server acquires, from the image processing server, the image to be superimposed that matches the first scene picture.
  • for example, in some application scenarios, the street view map server may obtain the matching image to be superimposed from the image processing server based on the related information of the first scene picture.
  • in step 210, the street view map server generates a second scene picture based on the first scene picture and the image to be superimposed that matches the first scene picture, and sends the second scene picture to the first client.
  • through the interaction process described above, when the first client requests a scene picture from the street view map server, the street view map server may determine whether there is an area to be updated (the area to be superimposed) in the scene picture (the first scene picture); if so, the corresponding images are superimposed in these areas to generate a new scene picture (the second scene picture), which is sent to the first client, thereby realizing the update of partial areas in the scene picture.
  • FIG. 3 is a schematic flowchart 300 of a method for determining an area to be superimposed of an image in a scene picture according to an embodiment of the present application.
  • in this embodiment, the electronic device on which the method for determining an area to be superimposed of an image in a scene picture runs (for example, the image processing server shown in FIG. 2) may obtain relevant information from a client and/or other servers (for example, the street view map server shown in FIG. 2) by means of a wired connection or a wireless connection.
  • it should be noted that the above wireless connection manner may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, Zigbee connections, UWB (ultra wideband) connections, and other wireless connection manners that are now known or to be developed in the future.
  • in step 310, a scene picture is acquired.
  • in some alternative implementations, for example, the scene picture may be acquired from a storage location where scene pictures are stored (e.g., a database storing scene pictures).
  • in step 320, the area to be superimposed in the scene picture is determined.
  • in some alternative implementations, for example, the area to be superimposed in the scene picture may be determined in a manner similar to that in step 202 described above.
  • in step 330, an image to be superimposed is received based on an update request of the user.
  • the update request includes identity information of the image to be superimposed.
  • the user may be, for example, a user corresponding to the second client in FIG. 2.
  • in step 340, the area to be superimposed that matches the image to be superimposed is determined based on the identity information.
  • in some alternative implementations, the determination of the area to be superimposed in the scene picture in step 320 may be performed in the manner described below.
  • specifically, in step 321, the scene picture is divided into a plurality of candidate superimposed areas.
  • for example, in some alternative implementations, image recognition technology may be used to divide the scene picture according to the contours of the street view objects contained in the scene picture.
  • in step 322, a candidate superimposed area is taken as the area to be superimposed based on the candidate superimposed area satisfying the predetermined condition.
  • in step 323, identification information is added to the area to be superimposed.
  • the identification information of the area to be superimposed may include, for example, geographic location information of the area to be superimposed, size information of the area to be superimposed, and time information to be updated of the area to be superimposed.
  • the identity information of the image to be superimposed may include, for example, geographic location information of the image to be superimposed, size information of the image to be superimposed, and time information to be updated of the image to be superimposed.
  • the predetermined condition may include, for example, that the expected update frequency of the candidate superimposed region is higher than the predetermined frequency.
  • FIG. 4 is a schematic flowchart 400 of a method for superimposing an image in a scene picture according to an embodiment of the present application.
  • the method of superimposing an image in a scene picture of the present embodiment may be run on a client (eg, the second client in FIG. 2).
  • specifically, in step 410, the identification information of the area to be superimposed in the scene picture is received.
  • the identification information of the area to be superimposed may include, for example, geographic location information of the area to be superimposed, size information of the area to be superimposed, and time information to be updated.
  • in step 420, the image to be superimposed is uploaded to the server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed.
  • the identity information of the image to be superimposed may also include geographic location information of the image, size information of the image, and time information of the image to be updated.
  • FIG. 5 is a schematic flowchart 500 of a method for presenting a scene picture according to an embodiment of the present application.
  • the method for presenting the scene picture of the embodiment may be run on a server (for example, the Street View Map server in FIG. 2).
  • specifically, in step 510, a scene picture acquisition request of a user is received, where the scene picture acquisition request includes geographic location information of the scene picture to be requested.
  • in step 520, a first scene picture that matches the scene picture acquisition request is acquired.
  • in step 530, based on the scene picture including an area to be superimposed, a matching image to be superimposed is added to the area to be superimposed to form a second scene picture.
  • in step 540, the second scene picture is presented to the user (e.g., the user corresponding to the first client in FIG. 2).
  • in this way, in some application scenarios, when a user requests a scene picture and part of the images in the scene picture have been updated, the server may superimpose the partially updated images at the corresponding positions in the scene picture, and present the superimposed scene picture to the user.
  • FIG. 6 is a schematic structural diagram 600 of an apparatus for determining an area to be superimposed of an image in a scene picture according to an embodiment of the present application.
  • the apparatus for determining an area to be superimposed of an image in a scene picture shown in FIG. 6 includes an obtaining module 610, a determining module 620, a receiving module 630, and a matching module 640.
  • the obtaining module 610 is configured to acquire a scene picture.
  • the determining module 620 can be configured to determine an area to be superimposed in the scene picture.
  • the receiving module 630 can be configured to receive an image to be superimposed based on an update request of the user.
  • the update request may include, for example, identity information of the image to be superimposed.
  • the matching module 640 can be configured to determine an area to be superimposed that matches the image to be superimposed based on the identity information.
  • the determining module 620 may be further configured to: divide the scene picture into a plurality of candidate superimposed regions; take a candidate superimposed region as the region to be superimposed based on the candidate superimposed region satisfying the predetermined condition; and add identification information to the region to be superimposed.
  • the identification information of the area to be superimposed may include, for example, geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information of the area to be superimposed.
  • the identity information of the image to be superimposed may include, for example, geographic location information of the image to be superimposed, size information of the image to be superimposed, and time information to be updated of the image to be superimposed.
  • the predetermined condition may include, for example, a desired update frequency of the candidate overlay region being higher than the predetermined frequency.
  • FIG. 7 is a schematic structural diagram 700 of an apparatus for superimposing an image in a scene picture according to an embodiment of the present application.
  • the apparatus for superimposing an image in a scene picture may include a receiving module 710 and an uploading module 720.
  • the receiving module 710 is configured to receive the identifier information of the area to be superimposed in the scene picture.
  • the identifier information of the area to be superimposed includes geographic location information of the area to be superimposed, size information of the area to be superimposed, and time information to be updated.
  • the uploading module 720 is configured to upload the image to be superimposed to the server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed.
  • the identity information of the image to be superimposed includes geographic location information of the image, size information of the image, and time information of the image to be updated.
  • FIG. 8 is a schematic structural diagram 800 of a device for presenting a scene picture according to an embodiment of the present application.
  • the presentation device of the scene picture may include a receiving module 810, an obtaining module 820, an adding module 830, and a rendering module 840.
  • the receiving module 810 is configured to receive a scene picture acquisition request of the user.
  • the scene picture acquisition request may include, for example, geographic location information of a scene picture to be requested.
  • the obtaining module 820 is configured to acquire a first scene picture that matches the scene picture acquisition request.
  • the adding module 830 is configured to add an image to be superimposed to the area to be superimposed to form a second scene picture based on the scene picture including the area to be superimposed.
  • the presentation module 840 can be configured to present the second scene picture to the user.
  • referring to FIG. 9, a block diagram of a computer system 900 suitable for implementing a terminal device or a server of an embodiment of the present application is shown.
  • the computer system 900 includes a central processing unit (CPU) 901, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage portion 908 into a random access memory (RAM) 903.
  • in the RAM 903, various programs and data required for the operation of the system 900 are also stored.
  • the CPU 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904.
  • An input/output (I/O) interface 905 is also coupled to bus 904.
  • the following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, etc.; an output portion 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 908 including a hard disk or the like; and a communication portion 909 including a network interface card such as a LAN card, a modem, or the like. The communication portion 909 performs communication processing via a network such as the Internet.
  • Driver 910 is also connected to I/O interface 905 as needed.
  • a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 910 as needed so that a computer program read therefrom is installed into the storage portion 908 as needed.
  • an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine readable medium, the computer program comprising program code for executing the method illustrated in the flowchart.
  • the computer program can be downloaded and installed from the network via the communication portion 909, and/or installed from the removable medium 911.
  • each block of the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • it should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present application may be implemented by software or by hardware.
  • the described units may also be provided in a processor, for example, described as: a processor including an acquisition module, a determination module, a reception module, and a matching module.
  • the name of these units does not constitute a limitation on the unit itself in some cases.
  • the acquisition module may also be described as a “module for acquiring a scene picture”.
  • the present application further provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus described in the foregoing embodiments, or may be a non-volatile computer storage medium that exists alone and is not assembled into a terminal.
  • the non-volatile computer storage medium stores one or more programs, which, when executed by a device, cause the device to: acquire a scene picture; determine an area to be superimposed in the scene picture; receive an image to be superimposed based on an update request of a user, wherein the update request includes identity information of the image to be superimposed; and determine, based on the identity information, the area to be superimposed that matches the image to be superimposed.
  • the non-volatile computer storage medium provided by the present application may further cause the device, when the one or more programs are executed by the device, to: receive identification information of an area to be superimposed in a scene picture, where the identification information of the area to be superimposed includes geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information; and upload the image to be superimposed to a server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed, where the identity information of the image to be superimposed includes geographic location information of the image, size information of the image, and to-be-updated time information of the image.
  • the non-volatile computer storage medium provided by the present application may further cause the device, when the one or more programs are executed by the device, to: receive a scene picture acquisition request of a user, where the scene picture acquisition request includes geographic location information of a scene picture to be requested; acquire a first scene picture that matches the scene picture acquisition request; based on the scene picture including an area to be superimposed, add a matching image to be superimposed to the area to be superimposed to form a second scene picture; and present the second scene picture to the user.
  • in some embodiments, determining the area to be superimposed in the scene picture comprises: dividing the scene picture into a plurality of candidate superimposed areas; taking a candidate superimposed area as the area to be superimposed based on the candidate superimposed area satisfying the predetermined condition; and adding identification information to the area to be superimposed.

Abstract

A method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture. The method for determining a to-be-superimposed area of an image in a scene picture includes: acquiring a scene picture (310); determining an area to be superimposed in the scene picture (320); receiving an image to be superimposed based on an update request of a user (330), where the update request includes identity information of the image to be superimposed; and determining, based on the identity information, the area to be superimposed that matches the image to be superimposed (340). The method achieves partial updating of a scene picture.

Description

Method and Apparatus for Determining a To-Be-Superimposed Area of an Image, Superimposing an Image, and Presenting a Picture
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201510632207.5, filed with the State Intellectual Property Office of China on September 29, 2015, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of computer technologies, specifically to the field of Internet technologies, and more particularly to a method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture.
BACKGROUND
Street view pictures provide great convenience for users to view real geographic information. The real streets, buildings, and other information in a street view allow users to learn, without leaving home, the actual situation around the places they care about.
In the prior art, the image information in a street view picture is fixed information collected at a certain moment. Therefore, before the street view is updated as a whole, the individual parts of the street view picture cannot be separately updated, added, or modified. As a result, the information contained in the street view pictures obtained by users is stale and may not match the actual situation.
SUMMARY
The purpose of the present application is to propose an improved method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture, so as to solve the technical problems mentioned in the Background section above.
In a first aspect, the present application provides a method for determining an area to be superimposed of an image in a scene picture, including: acquiring a scene picture; determining an area to be superimposed in the scene picture; receiving an image to be superimposed based on an update request of a user, where the update request includes identity information of the image to be superimposed; and determining, based on the identity information, the area to be superimposed that matches the image to be superimposed.
In some embodiments, determining the area to be superimposed in the scene picture includes: dividing the scene picture into a plurality of candidate superimposed areas; taking a candidate superimposed area as the area to be superimposed based on the candidate superimposed area satisfying a predetermined condition; and adding identification information to the area to be superimposed.
In some embodiments, the identification information of the area to be superimposed includes: geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information of the area to be superimposed; and the identity information of the image to be superimposed includes: geographic location information of the image to be superimposed, size information of the image to be superimposed, and to-be-updated time information of the image to be superimposed.
In some embodiments, the predetermined condition includes: the expected update frequency of the candidate superimposed area being higher than a predetermined frequency.
In a second aspect, the present application provides a method for superimposing an image in a scene picture, including: receiving identification information of an area to be superimposed in the scene picture, where the identification information of the area to be superimposed includes geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information; and uploading the image to be superimposed to a server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed, where the identity information of the image to be superimposed includes geographic location information of the image, size information of the image, and to-be-updated time information of the image.
In a third aspect, the present application provides a method for presenting a scene picture, including: receiving a scene picture acquisition request of a user, where the scene picture acquisition request includes geographic location information of a scene picture to be requested; acquiring a first scene picture that matches the scene picture acquisition request; based on the scene picture including an area to be superimposed, adding a matching image to be superimposed to the area to be superimposed to form a second scene picture; and presenting the second scene picture to the user.
In a fourth aspect, the present application provides an apparatus for determining an area to be superimposed of an image in a scene picture, including: an obtaining module, configured to acquire a scene picture; a determining module, configured to determine an area to be superimposed in the scene picture; a receiving module, configured to receive an image to be superimposed based on an update request of a user, where the update request includes identity information of the image to be superimposed; and a matching module, configured to determine, based on the identity information, the area to be superimposed that matches the image to be superimposed.
In some embodiments, the determining module is further configured to: divide the scene picture into a plurality of candidate superimposed areas; take a candidate superimposed area as the area to be superimposed based on the candidate superimposed area satisfying a predetermined condition; and add identification information to the area to be superimposed.
In some embodiments, the identification information of the area to be superimposed includes: geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information of the area to be superimposed; and the identity information of the image to be superimposed includes: geographic location information of the image to be superimposed, size information of the image to be superimposed, and to-be-updated time information of the image to be superimposed.
In some embodiments, the predetermined condition includes: the expected update frequency of the candidate superimposed area being higher than a predetermined frequency.
In a fifth aspect, the present application provides an apparatus for superimposing an image in a scene picture, including: a receiving module, configured to receive identification information of an area to be superimposed in the scene picture, where the identification information of the area to be superimposed includes geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information; and an uploading module, configured to upload the image to be superimposed to a server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed, where the identity information of the image to be superimposed includes geographic location information of the image, size information of the image, and to-be-updated time information of the image.
In a sixth aspect, the present application provides an apparatus for presenting a scene picture, including: a receiving module, configured to receive a scene picture acquisition request of a user, where the scene picture acquisition request includes geographic location information of a scene picture to be requested; an obtaining module, configured to acquire a first scene picture that matches the scene picture acquisition request; an adding module, configured to add, based on the scene picture including an area to be superimposed, a matching image to be superimposed to the area to be superimposed to form a second scene picture; and a presenting module, configured to present the second scene picture to the user.
With the method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture provided by the present application, by determining the area to be superimposed in a scene picture and adding a matching image to be superimposed to that area, the scene picture can be partially updated, or images can be superimposed in the scene picture, thereby improving the efficiency of scene picture updates and making the scene picture presented to the user better reflect reality.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features, objectives, and advantages of the present application will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram to which the present application may be applied;
FIG. 2 is an example of an interaction process of determining an area to be superimposed of an image in a scene picture, superimposing an image, and presenting the scene picture with the superimposed image according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a method for determining an area to be superimposed of an image in a scene picture according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of a method for superimposing an image in a scene picture according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of a method for presenting a scene picture according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an apparatus for determining an area to be superimposed of an image in a scene picture according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an apparatus for superimposing an image in a scene picture according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an apparatus for presenting a scene picture according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server of an embodiment of the present application.
DETAILED DESCRIPTION OF EMBODIMENTS
The present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the relevant invention, rather than to limit the invention. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that the embodiments in the present application and the features in the embodiments may be combined with each other on a non-conflict basis. The present application will be described in detail below with reference to the accompanying drawings and in combination with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the present application may be applied.
As shown in FIG. 1, the system architecture 100 may include a user 110 and terminal devices 111, 112, 113 corresponding to the user 110, a network 104, a server 105, a server 106, a user 120, and terminal devices 121, 122, 123 corresponding to the user 120. The network 104 serves as the medium providing communication links among the terminal devices 111, 112, 113 of the user 110, the server 105, the server 106, and the terminal devices 121, 122, 123 of the user 120. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The user 110 may use the terminal devices 111, 112, 113 to interact with the server 105 over the network 104, to receive or send messages and the like.
Similarly, the user 120 may use the terminal devices 121, 122, 123 to interact with the server 106 over the network 104, to receive or send messages and the like.
Similarly, the server 105 and the server 106 may also interact with each other over the network 104, to receive or send messages and the like.
Various communication client applications, such as web browser applications, street view map applications, search applications, instant messaging tools, mailbox clients, and social platform software, may be installed on the terminal devices 111, 112, 113 and the terminal devices 121, 122, 123.
The terminal devices 111, 112, 113 and the terminal devices 121, 122, 123 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and so on.
The server 105 and the server 106 may be servers that provide various services, for example, a street view map server that provides a street view map to the terminal devices 111, 112, 113 and/or the terminal devices 121, 122, 123, or an image processing server that adds an image to a relevant location in the street view map based on a request of the terminal devices 111, 112, 113 and/or the terminal devices 121, 122, 123.
It should be noted that the method for determining an area to be superimposed of an image in a scene picture provided by the embodiments of the present application may be performed by the server 105 or the server 106; accordingly, the apparatus for determining an area to be superimposed of an image in a scene picture may be set in the server 105 or the server 106.
The method for superimposing an image in a scene picture provided by the embodiments of the present application may be performed by the terminal devices 111, 112, 113 and/or the terminal devices 121, 122, 123; accordingly, the apparatus for superimposing an image in a scene picture may be set in the terminal devices 111, 112, 113 and/or the terminal devices 121, 122, 123.
The method for presenting a scene picture provided by the embodiments of the present application may be performed by the server 105 or the server 106; accordingly, the apparatus for presenting a scene picture may be set in the server 105 or the server 106.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Depending on implementation needs, there may be any number of terminal devices, networks, and servers.
Referring to FIG. 2, an example of an interaction process of determining an area to be superimposed of an image in a scene picture, superimposing an image, and presenting the scene picture with the superimposed image according to an embodiment of the present application is shown.
Those skilled in the art will appreciate that, for purposes of illustration and ease of understanding, one or more specific technical details are set forth and described in the following description, but the embodiments of the present application may also be practiced without these features. In the embodiment shown in FIG. 2, the interaction among one first user, one street view map server, one image processing server, and one second user is taken as an example for description.
Specifically, in step 201, the image processing server acquires a scene picture from the street view map server.
Generally, a plurality of scene pictures may be stored in the street view map server. In some alternative implementations, each scene picture may have information characterizing its geographic location (for example, city, street, house number, etc., or latitude and longitude).
Next, in step 202, the image processing server may determine the area to be superimposed in the scene picture it has acquired.
The image processing server may determine the area to be superimposed in the scene picture based on a predetermined rule. For example, in some alternative implementations, the image processing server may identify the street view objects (for example, buildings, street lights, and other objects) contained in the scene picture, and treat the areas in which these objects are located as the areas to be superimposed.
Next, in step 203, the second client acquires the identification information of the area to be superimposed.
Here, the identification information may be information capable of determining the position of the area to be superimposed in a one-to-one correspondence.
Next, in step 204, the second client determines whether the area to be superimposed matches the image to be superimposed.
Here, the word "match" may mean, for example, that the area to be superimposed is suitable for adding these images to be superimposed.
Next, in step 205, if the area to be superimposed matches the image to be superimposed, the second client may send an update request to the image processing server.
In some alternative implementations, when the second client sends the update request to the image processing server, the image to be superimposed may also be sent to the image processing server along with it.
Next, in step 206, the image processing server determines the area to be superimposed that matches the image to be superimposed.
In some application scenarios, as in step 204, whether the area to be superimposed matches the image to be superimposed may be determined by the second client. Alternatively, in other application scenarios, as in step 206, it may be determined by the image processing server. Or, in still other application scenarios, the second client may first determine whether the area to be superimposed matches the image to be superimposed based on certain judgment criteria and/or conditions, and then the image processing server may further determine whether they match based on judgment criteria and/or conditions that are the same as or different from those of the second client.
Next, in step 207, when the first client needs to acquire a scene picture, it may send a scene picture acquisition request to the street view map server.
In some application scenarios, the street view map server may store a plurality of different scene pictures, while the first client desires to obtain only one or a part of them. Therefore, in these application scenarios, the scene picture acquisition request sent by the first client may include related information of the scene pictures it desires to acquire. That is to say, after receiving the scene picture acquisition request sent by the first client, the street view map server may search the area storing all the scene pictures (for example, a database for storing scene pictures) according to the request, and find the scene pictures the first client desires to obtain.
Next, in step 208, the street view map server acquires the first scene picture corresponding to the scene picture acquisition request sent by the first client. For example, in some application scenarios, the street view map server may retrieve and acquire the corresponding first scene picture from the database storing the scene pictures, based on the scene picture acquisition request sent by the first client.
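As a hedged illustration of such a retrieval, the following Python sketch looks up a stored scene picture by the geographic location carried in the request; the in-memory dictionary, the sample coordinates and file names, and the tolerance are all assumptions standing in for a real scene-picture database:

```python
# Illustrative only: a dict stands in for the scene-picture database, and the
# sample coordinates and file names are hypothetical.
SCENE_DB = {
    (39.9151, 116.4039): "scene_0001.jpg",
    (31.2397, 121.4998): "scene_0002.jpg",
}

def get_first_scene(request_geo, tol=1e-3):
    """Return the stored picture whose location is within tol of the request."""
    for geo, picture in SCENE_DB.items():
        if abs(geo[0] - request_geo[0]) <= tol and abs(geo[1] - request_geo[1]) <= tol:
            return picture
    return None  # no stored scene matches the requested location
```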
Next, in step 209, the street view map server acquires, from the image processing server, the image to be superimposed that matches the first scene picture.
For example, in some application scenarios, the street view map server may obtain the matching image to be superimposed from the image processing server based on the related information of the first scene picture.
Next, in step 210, the street view map server generates a second scene picture based on the first scene picture and the image to be superimposed that matches the first scene picture, and sends the second scene picture to the first client.
Through the interaction process described above, when the first client requests a scene picture from the street view map server, the street view map server may determine whether there is an area to be updated (the area to be superimposed) in the scene picture (the first scene picture); if so, the corresponding images are superimposed in these areas to generate a new scene picture (the second scene picture), which is sent to the first client, thereby realizing the update of partial areas in the scene picture.
The interaction process of determining the area to be superimposed of an image in a scene picture, superimposing an image, and presenting the scene picture with the superimposed image has been described above. Next, the methods performed by the image processing server, the second client, and the street view map server will be described respectively.
FIG. 3 shows a schematic flowchart 300 of a method for determining an area to be superimposed of an image in a scene picture according to an embodiment of the present application. In this embodiment, the electronic device on which the method runs (for example, the image processing server shown in FIG. 2) may obtain relevant information from a client and/or other servers (for example, the street view map server shown in FIG. 2) by means of a wired connection or a wireless connection. It should be noted that the wireless connection manner may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, Zigbee connections, UWB (ultra wideband) connections, and other wireless connection manners that are now known or to be developed in the future.
In step 310, a scene picture is acquired. In some alternative implementations, for example, the scene picture may be acquired from a storage location where scene pictures are stored (for example, a database storing scene pictures).
Next, in step 320, the area to be superimposed in the scene picture is determined. In some alternative implementations, for example, the area to be superimposed in the scene picture may be determined in a manner similar to that in step 202 described above.
Next, in step 330, an image to be superimposed is received based on an update request of a user. Here, the update request includes identity information of the image to be superimposed.
Here, the user may be, for example, the user corresponding to the second client in FIG. 2.
Next, in step 340, the area to be superimposed that matches the image to be superimposed is determined based on the identity information.
In some alternative implementations, the determination of the area to be superimposed in the scene picture in step 320 may be performed in the manner described below.
Specifically, in step 321, the scene picture is divided into a plurality of candidate superimposed areas. For example, in some alternative implementations, image recognition technology may be used to divide the scene picture according to the contours of the street view objects contained in the scene picture.
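A minimal sketch of such a contour-based division is given below, assuming OpenCV as the image-recognition back end (the patent does not prescribe any particular library, and the edge thresholds and minimum contour area are illustrative):

```python
# Divide a scene picture into candidate superimposed areas along object
# contours. Thresholds and the minimum contour area are illustrative.
import cv2

def candidate_areas(scene_path: str):
    img = cv2.imread(scene_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # One bounding box (x, y, w, h) per detected street-view object outline.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```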
Next, in step 322, a candidate superimposed area is taken as the area to be superimposed based on the candidate superimposed area satisfying a predetermined condition.
Next, in step 323, identification information is added to the area to be superimposed.
In some alternative implementations, the identification information of the area to be superimposed may include, for example, geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information of the area to be superimposed.
Similarly, the identity information of the image to be superimposed may include, for example, geographic location information of the image to be superimposed, size information of the image to be superimposed, and to-be-updated time information of the image to be superimposed.
In some alternative implementations, the predetermined condition may include, for example: the expected update frequency of the candidate superimposed area being higher than a predetermined frequency.
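To make the predetermined condition concrete, here is a hedged Python sketch that keeps only the candidate areas whose expected update frequency exceeds a threshold; the per-year unit and the sample figures are assumptions for illustration:

```python
# Keep candidates expected to change more often than the predetermined
# frequency; the unit (updates per year) and sample values are assumptions.
def select_areas(candidates, min_updates_per_year=4.0):
    return [c for c in candidates
            if c["expected_updates_per_year"] > min_updates_per_year]

billboard = {"name": "billboard", "expected_updates_per_year": 12.0}
facade = {"name": "building facade", "expected_updates_per_year": 0.5}
print(select_areas([billboard, facade]))  # only the billboard qualifies
```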
FIG. 4 is a schematic flowchart 400 of a method for superimposing an image in a scene picture according to an embodiment of the present application. In some alternative implementations, the method for superimposing an image in a scene picture of this embodiment may run on a client (for example, the second client in FIG. 2).
Specifically, in step 410, the identification information of the area to be superimposed in the scene picture is received. In some alternative implementations, the identification information of the area to be superimposed may include, for example, geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information.
Next, in step 420, the image to be superimposed is uploaded to the server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed. In some alternative implementations, similar to the identification information of the area to be superimposed, the identity information of the image to be superimposed may also include geographic location information of the image, size information of the image, and to-be-updated time information of the image.
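The client-side match-then-upload step might look like the following Python sketch; the /superimpose endpoint, the field names, and the exact-equality matching rule are hypothetical, not part of the patent:

```python
# Upload only when the image's identity info matches the area's
# identification info; endpoint and field names are hypothetical.
import requests

def upload_if_matched(image_path, image_info, area_info, server_url):
    keys = ("geo", "size", "update_time")
    if any(image_info[k] != area_info[k] for k in keys):
        return None  # no matching area to be superimposed: do not upload
    with open(image_path, "rb") as f:
        return requests.post(f"{server_url}/superimpose",
                             files={"image": f},
                             data={k: str(image_info[k]) for k in keys})
```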
FIG. 5 is a schematic flowchart 500 of a method for presenting a scene picture according to an embodiment of the present application. In some alternative implementations, the method for presenting a scene picture of this embodiment may run on a server (for example, the street view map server in FIG. 2).
Specifically, in step 510, a scene picture acquisition request of a user is received, where the scene picture acquisition request includes geographic location information of the scene picture to be requested.
Next, in step 520, a first scene picture that matches the scene picture acquisition request is acquired.
Next, in step 530, based on the scene picture including an area to be superimposed, a matching image to be superimposed is added to the area to be superimposed to form a second scene picture.
Next, in step 540, the second scene picture is presented to the user (for example, the user corresponding to the first client in FIG. 2).
In this way, in some application scenarios, when a user requests a scene picture and part of the images in the scene picture have been updated, the server may superimpose the partially updated images at the corresponding positions in the scene picture, and present the superimposed scene picture to the user.
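As a hedged sketch of forming the second scene picture, the following Python code pastes each matched image into its area's position; using Pillow, and representing an area by a pixel rectangle (x, y, w, h), are illustrative assumptions:

```python
# Form the second scene picture by pasting matched images into their areas;
# Pillow and the (x, y, w, h) area representation are assumptions.
from PIL import Image

def compose_second_scene(first_scene_path, overlays):
    """overlays: list of (image_path, (x, y, w, h)) pairs for matched areas."""
    scene = Image.open(first_scene_path).convert("RGB")
    for image_path, (x, y, w, h) in overlays:
        patch = Image.open(image_path).convert("RGB").resize((w, h))
        scene.paste(patch, (x, y))
    return scene  # the second scene picture presented to the user
```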
FIG. 6 is a schematic structural diagram 600 of an apparatus for determining an area to be superimposed of an image in a scene picture according to an embodiment of the present application.
As shown in FIG. 6, the apparatus for determining an area to be superimposed of an image in a scene picture includes an obtaining module 610, a determining module 620, a receiving module 630, and a matching module 640.
The obtaining module 610 may be configured to acquire a scene picture.
The determining module 620 may be configured to determine the area to be superimposed in the scene picture.
The receiving module 630 may be configured to receive an image to be superimposed based on an update request of a user. In some alternative implementations, the update request may include, for example, identity information of the image to be superimposed.
The matching module 640 may be configured to determine, based on the identity information, the area to be superimposed that matches the image to be superimposed.
In some alternative implementations, the determining module 620 may be further configured to: divide the scene picture into a plurality of candidate superimposed areas; take a candidate superimposed area as the area to be superimposed based on the candidate superimposed area satisfying a predetermined condition; and add identification information to the area to be superimposed.
In some alternative implementations, the identification information of the area to be superimposed may include, for example, geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information of the area to be superimposed.
Similarly, the identity information of the image to be superimposed may include, for example, geographic location information of the image to be superimposed, size information of the image to be superimposed, and to-be-updated time information of the image to be superimposed.
In some alternative implementations, the predetermined condition may include, for example, the expected update frequency of the candidate superimposed area being higher than a predetermined frequency.
FIG. 7 is a schematic structural diagram 700 of an apparatus for superimposing an image in a scene picture according to an embodiment of the present application.
As shown in FIG. 7, the apparatus for superimposing an image in a scene picture may include a receiving module 710 and an uploading module 720.
The receiving module 710 may be configured to receive the identification information of the area to be superimposed in the scene picture. In some alternative implementations, the identification information of the area to be superimposed includes geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information.
The uploading module 720 may be configured to upload the image to be superimposed to the server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed. In some alternative implementations, the identity information of the image to be superimposed includes geographic location information of the image, size information of the image, and to-be-updated time information of the image.
FIG. 8 is a schematic structural diagram 800 of an apparatus for presenting a scene picture according to an embodiment of the present application.
As shown in FIG. 8, the apparatus for presenting a scene picture may include a receiving module 810, an obtaining module 820, an adding module 830, and a presenting module 840.
The receiving module 810 may be configured to receive a scene picture acquisition request of a user. In some alternative implementations, the scene picture acquisition request may include, for example, geographic location information of the scene picture to be requested.
The obtaining module 820 may be configured to acquire a first scene picture that matches the scene picture acquisition request.
The adding module 830 may be configured to add, based on the scene picture including an area to be superimposed, a matching image to be superimposed to the area to be superimposed to form a second scene picture.
The presenting module 840 may be configured to present the second scene picture to the user.
Referring to FIG. 9, a schematic structural diagram of a computer system 900 suitable for implementing a terminal device or a server of an embodiment of the present application is shown.
As shown in FIG. 9, the computer system 900 includes a central processing unit (CPU) 901, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage portion 908 into a random access memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the system 900 are also stored. The CPU 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
The following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, etc.; an output portion 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 908 including a hard disk or the like; and a communication portion 909 including a network interface card such as a LAN card, a modem, or the like. The communication portion 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as needed, so that a computer program read therefrom is installed into the storage portion 908 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for executing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from the network via the communication portion 909, and/or installed from the removable medium 911.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, for example, described as: a processor including an obtaining module, a determining module, a receiving module, and a matching module. The names of these units do not constitute a limitation on the units themselves in some cases; for example, the obtaining module may also be described as "a module for acquiring a scene picture".
In another aspect, the present application further provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus described in the foregoing embodiments, or may be a non-volatile computer storage medium that exists alone and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs, which, when executed by a device, cause the device to: acquire a scene picture; determine an area to be superimposed in the scene picture; receive an image to be superimposed based on an update request of a user, where the update request includes identity information of the image to be superimposed; and determine, based on the identity information, the area to be superimposed that matches the image to be superimposed.
The non-volatile computer storage medium provided by the present application may further cause the device, when the one or more programs are executed by the device, to: receive identification information of an area to be superimposed in a scene picture, where the identification information of the area to be superimposed includes geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information; and upload the image to be superimposed to a server based on the identity information of the image to be superimposed matching the identification information of the area to be superimposed, where the identity information of the image to be superimposed includes geographic location information of the image, size information of the image, and to-be-updated time information of the image.
The non-volatile computer storage medium provided by the present application may further cause the device, when the one or more programs are executed by the device, to: receive a scene picture acquisition request of a user, where the scene picture acquisition request includes geographic location information of a scene picture to be requested; acquire a first scene picture that matches the scene picture acquisition request; based on the scene picture including an area to be superimposed, add a matching image to be superimposed to the area to be superimposed to form a second scene picture; and present the second scene picture to the user.
In some embodiments, determining the area to be superimposed in the scene picture includes: dividing the scene picture into a plurality of candidate superimposed areas; taking a candidate superimposed area as the area to be superimposed based on the candidate superimposed area satisfying a predetermined condition; and adding identification information to the area to be superimposed.
The above description only provides an explanation of the preferred embodiments of the present application and the technical principles employed. It should be appreciated by those skilled in the art that the inventive scope involved in the present application is not limited to the technical solutions formed by the particular combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the inventive concept, for example, technical solutions formed by replacing the features disclosed in the present application with (but not limited to) technical features with similar functions.

Claims (14)

  1. A method for determining an area to be superimposed of an image in a scene picture, characterized in that the method comprises:
    acquiring a scene picture;
    determining an area to be superimposed in the scene picture;
    receiving an image to be superimposed based on an update request of a user, wherein the update request comprises identity information of the image to be superimposed; and
    determining, based on the identity information, the area to be superimposed that matches the image to be superimposed.
  2. The method according to claim 1, characterized in that determining the area to be superimposed in the scene picture comprises:
    dividing the scene picture into a plurality of candidate superimposed areas;
    taking a candidate superimposed area as the area to be superimposed based on the candidate superimposed area satisfying a predetermined condition; and
    adding identification information to the area to be superimposed.
  3. The method according to claim 2, characterized in that:
    the identification information of the area to be superimposed comprises: geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information of the area to be superimposed;
    the identity information of the image to be superimposed comprises: geographic location information of the image to be superimposed, size information of the image to be superimposed, and to-be-updated time information of the image to be superimposed.
  4. The method according to claim 2 or 3, characterized in that the predetermined condition comprises:
    the expected update frequency of the candidate superimposed area being higher than a predetermined frequency.
  5. A method for superimposing an image in a scene picture, characterized by comprising:
    receiving identification information of an area to be superimposed in the scene picture, wherein the identification information of the area to be superimposed comprises geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information; and
    uploading the image to be superimposed to a server based on identity information of the image to be superimposed matching the identification information of the area to be superimposed, wherein the identity information of the image to be superimposed comprises geographic location information of the image, size information of the image, and to-be-updated time information of the image.
  6. A method for presenting a scene picture, characterized by comprising:
    receiving a scene picture acquisition request of a user, wherein the scene picture acquisition request comprises geographic location information of a scene picture to be requested;
    acquiring a first scene picture that matches the scene picture acquisition request;
    based on the scene picture including an area to be superimposed, adding a matching image to be superimposed to the area to be superimposed to form a second scene picture; and
    presenting the second scene picture to the user.
  7. An apparatus for determining an area to be superimposed of an image in a scene picture, characterized in that the apparatus comprises:
    an obtaining module, configured to acquire a scene picture;
    a determining module, configured to determine an area to be superimposed in the scene picture;
    a receiving module, configured to receive an image to be superimposed based on an update request of a user, wherein the update request comprises identity information of the image to be superimposed; and
    a matching module, configured to determine, based on the identity information, the area to be superimposed that matches the image to be superimposed.
  8. The apparatus according to claim 7, characterized in that the determining module is further configured to:
    divide the scene picture into a plurality of candidate superimposed areas;
    take a candidate superimposed area as the area to be superimposed based on the candidate superimposed area satisfying a predetermined condition; and
    add identification information to the area to be superimposed.
  9. The apparatus according to claim 8, characterized in that:
    the identification information of the area to be superimposed comprises: geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information of the area to be superimposed;
    the identity information of the image to be superimposed comprises: geographic location information of the image to be superimposed, size information of the image to be superimposed, and to-be-updated time information of the image to be superimposed.
  10. The apparatus according to claim 8 or 9, characterized in that the predetermined condition comprises:
    the expected update frequency of the candidate superimposed area being higher than a predetermined frequency.
  11. An apparatus for superimposing an image in a scene picture, characterized by comprising:
    a receiving module, configured to receive identification information of an area to be superimposed in the scene picture, wherein the identification information of the area to be superimposed comprises geographic location information of the area to be superimposed, size information of the area to be superimposed, and to-be-updated time information; and
    an uploading module, configured to upload the image to be superimposed to a server based on identity information of the image to be superimposed matching the identification information of the area to be superimposed, wherein the identity information of the image to be superimposed comprises geographic location information of the image, size information of the image, and to-be-updated time information of the image.
  12. An apparatus for presenting a scene picture, characterized by comprising:
    a receiving module, configured to receive a scene picture acquisition request of a user, wherein the scene picture acquisition request comprises geographic location information of a scene picture to be requested;
    an obtaining module, configured to acquire a first scene picture that matches the scene picture acquisition request;
    an adding module, configured to add, based on the scene picture including an area to be superimposed, a matching image to be superimposed to the area to be superimposed to form a second scene picture; and
    a presenting module, configured to present the second scene picture to the user.
  13. A device, comprising:
    a processor; and
    a memory,
    wherein the memory stores computer-readable instructions executable by the processor, and when the computer-readable instructions are executed, the processor performs the method according to any one of claims 1 to 6.
  14. A non-volatile computer storage medium storing computer-readable instructions executable by a processor, wherein when the computer-readable instructions are executed by the processor, the processor performs the method according to any one of claims 1 to 6.
PCT/CN2015/097585 2015-09-29 2015-12-16 Method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture WO2017054327A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020177021630A 2015-09-29 2015-12-16 Method and apparatus for determining an area of an image to be superimposed, method and apparatus for superimposing an image, and method and apparatus for displaying an image
EP15905238.0A EP3242225B1 (en) 2015-09-29 2015-12-16 Method and apparatus for determining region of image to be superimposed, superimposing image and displaying image
JP2017541282A 2015-09-29 2015-12-16 Determination of an area of an image to be superimposed, image superimposition, and image display method and apparatus
US15/549,081 US10380748B2 (en) 2015-09-29 2015-12-16 Method and apparatus for determining to-be-superimposed area of image, superimposing image and presenting picture

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510632207.5 2015-09-29
CN201510632207.5A CN105243119B (zh) 2015-09-29 2015-09-29 Method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture

Publications (1)

Publication Number Publication Date
WO2017054327A1 true WO2017054327A1 (zh) 2017-04-06

Family

ID=55040767

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/097585 WO2017054327A1 (zh) 2015-09-29 2015-12-16 Method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture

Country Status (6)

Country Link
US (1) US10380748B2 (zh)
EP (1) EP3242225B1 (zh)
JP (1) JP6644800B2 (zh)
KR (1) KR20170102517A (zh)
CN (1) CN105243119B (zh)
WO (1) WO2017054327A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10606884B1 (en) * 2015-12-17 2020-03-31 Amazon Technologies, Inc. Techniques for generating representative images
CN106713840B * 2016-06-28 2018-09-04 腾讯科技(深圳)有限公司 Virtual information display method and device
CN106991404B * 2017-04-10 2019-06-28 山东师范大学 Land cover update method and system based on crowd-sourced geographic data
CN110019608B * 2017-11-16 2022-08-05 腾讯科技(深圳)有限公司 Information collection method, device and system, and storage device
CN108597034B * 2018-04-28 2022-11-01 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
US10504264B1 * 2018-11-06 2019-12-10 Eric Koenig Method and system for combining images
CN110174686B * 2019-04-16 2021-09-24 百度在线网络技术(北京)有限公司 Method, apparatus and system for matching GNSS positions with images in a crowdsourced map
CN112308939B * 2020-09-14 2024-04-16 北京沃东天骏信息技术有限公司 Image generation method and device
CN116772803B * 2023-08-24 2024-02-09 陕西德鑫智能科技有限公司 Unmanned aerial vehicle detection method and device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4144683B2 (ja) * 1999-05-25 2008-09-03 クラリオン株式会社 Navigation apparatus and method, and recording medium recording navigation software
JP2003173356A (ja) * 2001-12-05 2003-06-20 Nippon Telegr & Teleph Corp <Ntt> Search result display apparatus and method, search result display program, and computer-readable recording medium storing the program
US20070210937A1 (en) 2005-04-21 2007-09-13 Microsoft Corporation Dynamic rendering of map information
CN101275854A (zh) * 2007-03-26 2008-10-01 日电(中国)有限公司 Method and device for updating map data
CA2705019A1 (en) * 2007-11-07 2009-05-14 Tele Atlas B.V. Method of and arrangement for mapping range sensor data on image sensor data
US20100004995A1 (en) * 2008-07-07 2010-01-07 Google Inc. Claiming Real Estate in Panoramic or 3D Mapping Environments for Advertising
CN101923709B (zh) * 2009-06-16 2013-06-26 日电(中国)有限公司 Image stitching method and device
US9166726B2 (en) * 2011-04-20 2015-10-20 Nec Corporation Diverging device with OADM function and wavelength division multiplexing optical network system and method therefor
CN104050177B (zh) * 2013-03-13 2018-12-28 腾讯科技(深圳)有限公司 Street view generating method and server
US10360246B2 (en) * 2013-05-20 2019-07-23 Tencent Technology (Shenzhen) Co., Ltd. Method, system, and apparatus for searching and displaying user generated content
US20140372841A1 (en) * 2013-06-14 2014-12-18 Henner Mohr System and method for presenting a series of videos in response to a selection of a picture
CN103761274B (zh) * 2014-01-09 2017-03-01 深圳先进技术研究院 Method for updating a street view database with panoramic cameras

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102543038A (zh) * 2010-12-30 2012-07-04 上海博泰悦臻电子设备制造有限公司 Display method and display device
US20150002539A1 (en) * 2013-06-28 2015-01-01 Tencent Technology (Shenzhen) Company Limited Methods and apparatuses for displaying perspective street view map
CN104596523A (zh) * 2014-06-05 2015-05-06 腾讯科技(深圳)有限公司 Street view destination guiding method and device
CN104657206A (zh) * 2015-02-09 2015-05-27 青岛海信移动通信技术股份有限公司 Image data processing method and apparatus
CN104915432A (zh) * 2015-06-18 2015-09-16 百度在线网络技术(北京)有限公司 Street view image acquiring method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3242225A4 *

Also Published As

Publication number Publication date
EP3242225A4 (en) 2018-07-11
JP2018513441A (ja) 2018-05-24
JP6644800B2 (ja) 2020-02-12
CN105243119B (zh) 2019-05-24
US10380748B2 (en) 2019-08-13
CN105243119A (zh) 2016-01-13
EP3242225B1 (en) 2020-05-13
KR20170102517A (ko) 2017-09-11
EP3242225A1 (en) 2017-11-08
US20180197302A1 (en) 2018-07-12

Similar Documents

Publication Publication Date Title
WO2017054327A1 (zh) Method and apparatus for determining a to-be-superimposed area of an image, superimposing an image, and presenting a picture
US10735547B2 (en) Systems and methods for caching augmented reality target data at user devices
JP6569313B2 (ja) 施設特性を更新する方法、施設をプロファイリングする方法、及びコンピュータ・システム
US9558242B2 (en) Social where next suggestion
US20140343984A1 (en) Spatial crowdsourcing with trustworthy query answering
WO2017181613A1 (zh) Search response method, apparatus and system
US10243752B2 (en) Social media system and method
US10018480B2 (en) Point of interest selection based on a user request
WO2017124993A1 (zh) Information display method and apparatus
US20100268717A1 (en) Use of mobile devices for viewing and publishing location-based user information
US11184307B2 (en) System, apparatus, method, and non-transitory computer readable medium for providing location information by transmitting an image including the location information through a chatroom
US10614621B2 (en) Method and apparatus for presenting information
WO2012126381A1 (zh) Device and method for acquiring shared objects related to a real scene
US20180336529A1 (en) Job posting standardization and deduplication
US11082535B2 (en) Location enabled augmented reality (AR) system and method for interoperability of AR applications
CN103945008A (zh) Network information sharing method and apparatus
US20190158547A1 (en) Augmented reality platform for professional services delivery
RU2604725C2 (ru) Система и способ генерирования информации о множестве точек интереса
US20160092838A1 (en) Job posting standardization and deduplication
US20170060965A1 (en) System, server and method for managing contents based on location grouping
US20160050283A1 (en) System and Method for Automatically Pushing Location-Specific Content to Users
US10567301B2 (en) Implementation of third party services in a digital service platform
CN110365726B (zh) Communication processing method, apparatus, terminal and server
KR102464437B1 (ko) Metaverse-based cross-platform service system providing gigapixel media object viewing and trading
JP7336780B1 (ja) Program, method, information processing apparatus, and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15905238

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20177021630

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017541282

Country of ref document: JP

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2015905238

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE