CN117082226A - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN117082226A
CN117082226A
Authority
CN
China
Prior art keywords
image
images
dimensional model
determining
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210482384.XA
Other languages
Chinese (zh)
Inventor
孙曦
张晟
林美霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210482384.XA priority Critical patent/CN117082226A/en
Publication of CN117082226A publication Critical patent/CN117082226A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an image processing method, an image processing device, and an electronic device. The method comprises: receiving a plurality of first images associated with a first area and sent by a terminal device, where the first area is the area in which the terminal device is located; acquiring, from a database, at least one second image corresponding to each first image, where the image similarity between each first image and the corresponding at least one second image is greater than or equal to a first threshold; determining a three-dimensional model of the first area based on the positions indicated by the second images; and sending the three-dimensional model to the terminal device. This improves the accuracy of constructing the three-dimensional model.

Description

Image processing method and device and electronic equipment
Technical Field
The disclosure relates to the technical field of computer vision, and in particular relates to an image processing method, an image processing device and electronic equipment.
Background
Image retrieval technology can be used to search a database for retrieval images related to a query image, and a corresponding three-dimensional model can then be constructed from the retrieved images.
Currently, a server may obtain retrieval images related to a query image according to the image features of the query image. For example, a global feature vector of the query image is obtained, images visually similar to the query image are then retrieved from the database according to the global feature vector, and a three-dimensional model is constructed from the plurality of visually similar images. However, when images of different areas have highly similar features, the retrieval images acquired based on the image features of the query image may be images of different areas; such retrieval images cannot yield an accurate three-dimensional model, so the accuracy of the three-dimensional model is low.
Disclosure of Invention
The disclosure provides an image processing method, an image processing device, and an electronic device, which are used to solve the technical problem in the prior art of low accuracy in constructing three-dimensional models.
In a first aspect, the present disclosure provides an image processing method, the method comprising:
receiving a plurality of first images associated with a first area and sent by a terminal device, where the first area is the area in which the terminal device is located;
acquiring, from a database, at least one second image corresponding to each first image, where the image similarity between each first image and the corresponding at least one second image is greater than or equal to a first threshold;
determining a three-dimensional model of the first area based on the positions indicated by the second images;
and sending the three-dimensional model to the terminal device.
In a second aspect, the present disclosure provides an image processing apparatus, including a receiving module, an acquiring module, a determining module, and a sending module, wherein:
the receiving module is configured to receive a plurality of first images associated with a first area and sent by a terminal device, where the first area is the area in which the terminal device is located;
the acquiring module is configured to acquire, from a database, at least one second image corresponding to each first image, where the image similarity between each first image and the corresponding at least one second image is greater than or equal to a first threshold;
the determining module is configured to determine a three-dimensional model of the first area based on the positions indicated by the second images;
the sending module is configured to send the three-dimensional model to the terminal device.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform the image processing method as described above in the first aspect and various possible aspects of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the image processing method as described in the first aspect and the various possible aspects of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the image processing method as described above in the first aspect and the various possible aspects of the first aspect.
The disclosure provides an image processing method, an image processing device, and an electronic device. A server receives a plurality of first images associated with a first area and sent by a terminal device, where the first area is the area in which the terminal device is located; acquires, from a database, at least one second image corresponding to each first image, where the image similarity between each first image and the corresponding at least one second image is greater than or equal to a first threshold; determines a three-dimensional model of the first area based on the positions indicated by the second images; and sends the three-dimensional model to the terminal device. In this way, the server can acquire from the database, according to the image features of the plurality of first images, at least one second image that is visually similar to each first image, and construct the three-dimensional model using only part of the second images based on the positions they indicate. Even when the second images come from different areas, the server can therefore construct the three-dimensional model accurately, improving the accuracy of the three-dimensional model.
Drawings
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a process of a server receiving a plurality of first images according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a process for acquiring multiple location sets according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of a method for constructing a three-dimensional model according to an embodiment of the disclosure;
fig. 6 is a process schematic diagram of an image processing method according to an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure; and
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
For ease of understanding, concepts related to the embodiments of the present disclosure will be first described.
Terminal device: a device with wireless transceiver capability. A terminal device may be deployed on land (indoors or outdoors, hand-held, wearable, or vehicle-mounted) or on the water surface (e.g., on a ship). The terminal device may be a mobile phone, a tablet computer (Pad), a computer with wireless transceiver capability, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a vehicle-mounted terminal device, a wireless terminal in self-driving, a wireless terminal in remote medicine, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a wearable terminal device, or the like. The terminal device according to the embodiments of the present disclosure may also be referred to as a terminal, user equipment (UE), an access terminal device, a vehicle terminal, an industrial control terminal, a UE unit, a UE station, a mobile station, a remote terminal device, a mobile device, a UE terminal device, a wireless communication device, a UE proxy, or a UE apparatus. The terminal device may be fixed or mobile.
In the related art, the server may obtain, from a database, retrieval images related to a query image, and then construct a corresponding three-dimensional model from the retrieved images. Currently, a server may acquire retrieval images that are visually similar to the query image based on the image features of the query image. For example, the server may retrieve, from the database, images visually similar to the query image according to the global feature vector of the query image and the global feature vector of each image in the database, and then construct a three-dimensional model from the retrieved images. However, when buildings in different areas are highly similar, the plurality of retrieval images corresponding to the query image acquired through visual similarity may be images of different areas, resulting in low accuracy of the constructed three-dimensional model.
To solve the technical problem of low three-dimensional-model accuracy in the related art, the disclosure provides an image processing method. A server receives a plurality of first images associated with a first area in which a terminal device is located and sent by the terminal device, and acquires, from a database, at least one second image corresponding to each first image, where the image similarity between each first image and the corresponding at least one second image is greater than or equal to a first threshold. The positions indicated by the second images are divided into a plurality of position sets, where the distance between any two positions in a position set is less than or equal to a second threshold, and the three-dimensional model of the first area is determined according to the plurality of position sets. In this way, the server can acquire from the database, according to the image features of the plurality of first images, a plurality of second images visually similar to each first image; even when the second images come from different areas, the server can select second images of the same area to construct the three-dimensional model according to the positions they indicate, improving the accuracy of the three-dimensional model.
Next, an application scenario of the embodiment of the present disclosure will be described with reference to fig. 1.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present disclosure. Referring to fig. 1, the system includes a terminal device, a server, and a database. The terminal device acquires a plurality of first images and sends them to the server. According to the plurality of first images, the server acquires from the database a plurality of second images whose similarity with the first images is greater than or equal to a first threshold. The server determines a three-dimensional model according to the positions of the plurality of second images and sends it to the terminal device. When the terminal device receives the three-dimensional model, it can display the model in a display page. In this way, the server can accurately acquire from the database a plurality of second images visually similar to each first image, and can select second images in the same area to construct the three-dimensional model according to the positions of the second images. Images that are similar to the first images but located elsewhere are thus prevented from participating in the construction of the three-dimensional model of the first area, which reduces the complexity of model construction and improves the accuracy of the three-dimensional model.
The following describes the technical solutions of the present disclosure and how the technical solutions of the present disclosure solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the disclosure. Referring to fig. 2, the method may include:
s201, receiving a plurality of first images which are sent by the terminal equipment and are associated with the first area.
The execution subject of the embodiments of the present disclosure may be a server, or may be an image processing apparatus provided in the server. The image processing device may be implemented by software, or the image processing device may be implemented by a combination of software and hardware.
Optionally, the first area is the area in which the terminal device is located. For example, if the terminal device is located in building 1 of a mall, the first area is the area of building 1; if the terminal device is located in building 5 of the mall, the first area is the area of building 5. Optionally, the plurality of first images are look-around images of the first area, i.e., images captured as the terminal device is turned through a full circle in the first area. For example, images captured by a user looking around with a mobile phone may be used as the first images.
Alternatively, the terminal device may send a data processing request including a plurality of first images to the server, so that the server acquires the plurality of first images. For example, after a user captures a plurality of first images using a terminal device, the user clicks a send button of the terminal device, the terminal device generates a data processing request, and sends the data processing request to a server.
Next, a process of the server receiving a plurality of first images will be described in detail with reference to fig. 3.
Fig. 3 is a schematic diagram of a process of a server receiving a plurality of first images according to an embodiment of the disclosure. Referring to fig. 3, the scenario includes a terminal device and a server. The display page of the terminal device displays a first image A, a first image B, a first image C, and a first image D captured while looking around a full circle. When the user clicks the send control in the display page, the terminal device generates a data processing request, which includes the first images A, B, C, and D, and sends it to the server, so that the server acquires the first images captured by the terminal device.
S202, at least one second image corresponding to each first image is acquired in a database.
Optionally, a second image is an image related to a first image; the image similarity between the first image and the corresponding at least one second image is greater than or equal to a first threshold. For example, if the first images acquired by the server include a first image A and a first image B, the server acquires from the database the second images whose image similarity with first image A is greater than or equal to the first threshold, and the second images whose image similarity with first image B is greater than or equal to the first threshold.
Optionally, for any first image, at least one corresponding second image may be acquired from the database according to the following possible implementation: acquire a first global feature of the first image, acquire a second global feature of each image in the database, and acquire at least one second image corresponding to the first image according to the first global feature and the plurality of second global features. The first global feature indicates the image features of the first image; each second global feature indicates the image features of one image in the database.
Alternatively, the first global feature of the first image and the second global feature of each image in the database may be obtained from a preset model. For example, the first image and the images in the database are input into the preset model, which outputs the first global feature of the first image and the second global features of the images in the database. The preset model is learned from multiple groups of samples, each group comprising a sample image and its sample image features. For example, a sample image and the corresponding sample image features are obtained to form one group of samples; multiple groups of samples can be obtained in this way; the preset model is trained on these groups, and once training is complete, the first global features and second global features can be obtained from the model. For example, the groups of samples may be as shown in table 1:
TABLE 1
Multiple groups of samples Sample image Sample image features
First group of samples Sample image 1 Sample image feature 1
Second group of samples Sample image 2 Sample image feature 2
Third group of samples Sample image 3 Sample image feature 3
…… …… ……
It should be noted that table 1 illustrates a plurality of sets of samples by way of example only, and is not limited to the plurality of sets of samples.
For example, if the first image input to the preset model is the sample image 1, the first global feature corresponding to the first image is the sample image feature 1; if the first image input to the preset model is the sample image 2, the first global feature corresponding to the first image is the sample image feature 2; if the first image input to the preset model is the sample image 3, the first global feature corresponding to the first image is the sample image feature 3. Optionally, the method for acquiring the second global feature of the image in the database through the preset model is the same as the method for acquiring the first global feature, which is not described in detail in the present disclosure.
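The disclosure does not specify the architecture of the preset model; any mapping from an image to a fixed-length descriptor fills its role here. As a minimal stand-in (not the learned model described above), the sketch below uses an L2-normalized intensity histogram as the global feature:

```python
import numpy as np

def global_feature(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Stand-in 'preset model': map an image to a fixed-length global descriptor.

    Here the descriptor is an L2-normalized intensity histogram; the actual
    preset model in the disclosure is learned from sample image/feature pairs.
    """
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    feat = hist.astype(np.float64)
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat

# A synthetic 32x32 grayscale "image" stands in for a first image or database image.
img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
feat = global_feature(img)  # one global feature vector per image
```

The same function would be applied once to the first image (yielding the first global feature) and once to every database image (yielding the second global features).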
Optionally, at least one second image corresponding to the first image is obtained according to the first global feature and the plurality of second global features, specifically: a first similarity between the first global feature and each of the second global features is obtained. Alternatively, the first similarity may be obtained according to the following possible implementation manner: and determining cosine similarity and/or Euclidean distance between the first global feature and the second global feature, and determining the first similarity according to the cosine similarity and/or Euclidean distance. For example, the server determines cosine similarity between the first global feature and the second global feature as the first similarity, and the server may also determine euclidean distance between the first global feature and the second global feature as the first similarity. For example, the server may determine the weight of the cosine similarity and the weight of the euclidean distance, and further determine the first similarity according to the cosine similarity between the first global feature and the second global feature, the weight of the cosine similarity, the euclidean distance, and the weight of the euclidean distance.
If the first similarity is greater than or equal to the first threshold, the image corresponding to that second global feature is determined to be a second image. For example, if the first similarities between the first image and a plurality of images in the database are each greater than or equal to the first threshold, those images are all determined to be second images; if the first similarities between the first global feature of the first image and the second global features of images A and B in the database are both greater than or equal to the first threshold, images A and B are determined to be second images.
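The weighted combination of cosine similarity and Euclidean distance described above can be sketched as follows. The 0.5/0.5 weights, the mapping of Euclidean distance into a (0, 1] similarity, and the threshold value 0.8 are all illustrative assumptions, since the disclosure leaves the exact combination rule open:

```python
import numpy as np

def first_similarity(f1, f2, w_cos: float = 0.5, w_euc: float = 0.5) -> float:
    """Combine cosine similarity and Euclidean distance into one first similarity."""
    f1 = np.asarray(f1, dtype=np.float64)
    f2 = np.asarray(f2, dtype=np.float64)
    cos_sim = float(f1 @ f2) / (np.linalg.norm(f1) * np.linalg.norm(f2))
    # Map the Euclidean distance into a similarity in (0, 1]: closer means higher.
    euc_sim = 1.0 / (1.0 + float(np.linalg.norm(f1 - f2)))
    return w_cos * cos_sim + w_euc * euc_sim

# Keep a database image as a second image only if it clears the first threshold.
FIRST_THRESHOLD = 0.8  # hypothetical value

def is_second_image(query_feat, db_feat) -> bool:
    return first_similarity(query_feat, db_feat) >= FIRST_THRESHOLD
```

Identical features score 1.0 under both measures, so the weighted sum is bounded by 1.0 regardless of the chosen weights (as long as they sum to 1).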
S203, determining a three-dimensional model of the first area based on the position indicated by the second image.
Alternatively, the position indicated by the second image may be a shooting position of the second image. For example, if the second image is taken at position a, the position indicated by the second image is position a, and if the second image is taken at position B, the position indicated by the second image is position B. Alternatively, the database may store the location of each image. For example, when the terminal device captures an image, the terminal device may store the capturing position of the image in the image information of each image, and further when the database stores a plurality of images, the position of each image may also be stored.
The three-dimensional model of the first area may be determined according to the following possible implementation: divide the positions into a plurality of position sets, where the distance between any two positions in a position set is less than or equal to a second threshold. For example, if a position set includes positions A, B, and C, then the distances between positions A and B, A and C, and B and C are each less than or equal to the second threshold.
Alternatively, the plurality of positions may be processed by a preset algorithm to divide them into a plurality of position sets. For example, the positions may be processed by a clustering algorithm and thus divided into clusters, where the distance between any two positions in each cluster is less than or equal to the second threshold.
Alternatively, the area in which the plurality of positions are located may be divided into a plurality of grid cells, and the positions may then be divided into position sets according to the cells. For example, if the positions of the second images all fall within a preset area, the preset area is evenly divided into a number of grid cells (e.g., 100), and positions falling within the same block of cells (e.g., each block of 9 cells) are assigned to the same position set. For example, if the preset area containing the positions of the second images can be divided into grid A, grid B, and grid C, the positions in grid A are determined to be position set A, the positions in grid B position set B, and the positions in grid C position set C.
Next, a process of dividing a location into a plurality of location sets will be described with reference to fig. 4.
Fig. 4 is a schematic diagram of a process for acquiring multiple location sets according to an embodiment of the disclosure. Referring to fig. 4, the figure shows a preset area comprising grid A, grid B, grid C, and grid D. The positions of the plurality of second images lie in the preset area: positions A, B, C, D, and E are in grid A; position F is in grid B; positions G and H are in grid C; grid D contains no second-image positions. According to the positions of the second images in the grids, position set A (positions A, B, C, D, and E), position set B (position F), and position set C (positions G and H) are obtained.
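The grid-based division illustrated in fig. 4 can be sketched as follows; the cell size and the sample coordinates are made-up values for illustration:

```python
from collections import defaultdict

def partition_by_grid(positions, cell_size: float):
    """Divide (x, y) shooting positions into position sets by grid cell.

    Positions that fall in the same cell end up in the same set, so the
    second threshold is implied by the cell size (at most cell_size * sqrt(2)
    between any two positions in a set).
    """
    cells = defaultdict(list)
    for x, y in positions:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y))
    return list(cells.values())

# Made-up shooting positions: two close together, one nearby, one far away.
positions = [(0.5, 0.5), (0.8, 0.2), (5.1, 0.3), (9.7, 9.9)]
location_sets = partition_by_grid(positions, cell_size=3.0)  # three sets
```

The first two positions share a cell and form one set; the other two each form their own set, mirroring how grid D in fig. 4 simply yields no set.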
A three-dimensional model is determined from the plurality of position sets. For example, the server may determine the three-dimensional model from the images corresponding to the position sets: the server may select one position set from the plurality of sets and construct the three-dimensional model from the images corresponding to that set. In this way, even when the position of the first area is unknown, the server can accurately obtain enough second images from the database and construct the three-dimensional model of the first area from them, improving the accuracy of the three-dimensional model.
S204, sending the three-dimensional model to the terminal equipment.
Optionally, after the server determines the three-dimensional model according to the plurality of position sets, the server may send the model to the terminal device, which displays it upon receipt. For example, when a user uses the terminal device to capture a plurality of first images of building 1 (whose position is unknown) and sends them to the server, the server can, upon receiving them, accurately search the database for images related to building 1 and quickly reconstruct the three-dimensional model of building 1 from those images. After constructing the model, the server sends it to the terminal device, and the user can view the three-dimensional model of building 1 on the terminal device.
The embodiment of the disclosure provides an image processing method. A server receives a plurality of first images associated with a first area in which a terminal device is located and sent by the terminal device, acquires from a database at least one second image corresponding to each first image, divides the positions indicated by the second images into a plurality of position sets, where the distance between any two positions in a position set is less than or equal to a second threshold, and determines the three-dimensional model of the first area according to the plurality of position sets. In this way, even when the position of the first area is unknown, the server can acquire from the database, according to the image features of the plurality of first images, a plurality of second images visually similar to each first image, and can select second images of the same area to determine the three-dimensional model. Images similar to the first images but located elsewhere are thus prevented from participating in the construction of the three-dimensional model of the first area, which reduces the complexity of model construction and improves the accuracy of the three-dimensional model.
On the basis of the embodiment shown in fig. 2, a method of determining a three-dimensional model of the first region based on a plurality of positions of a plurality of second images in the image processing method shown in fig. 2 will be further described with reference to fig. 5.
Fig. 5 is a flowchart of a method for determining a three-dimensional model according to an embodiment of the present disclosure. Referring to fig. 5, the method includes:
s501, dividing the position into a plurality of position sets.
Optionally, the distance between any two positions in a position set is less than or equal to the second threshold. When dividing the plurality of positions, the server may divide the positions of the second images corresponding to each first image separately. For example, when the first images received by the server include a first image A and a first image B, the server may acquire a plurality of second images a corresponding to first image A and a plurality of second images b corresponding to first image B, and then divide the positions of the second images a into a plurality of position sets and, separately, divide the positions of the second images b into a plurality of position sets. When there are many second images, this improves the accuracy of the server's position division.
Alternatively, the server may divide the positions of the second images corresponding to all of the first images together. For example, when the received first images include a first image A and a first image B, the server may acquire a plurality of second images A corresponding to the first image A and a plurality of second images B corresponding to the first image B, and then divide the combined positions of the second images A and the second images B into position sets. When there are few second images, this improves the processing efficiency of the server.
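The patent does not pin the division step (S501) to a specific algorithm; one minimal sketch, assuming 2-D positions and a greedy grouping that enforces the pairwise-distance rule, is:

```python
import math

def cluster_positions(positions, second_threshold):
    """Greedily group 2-D positions so that any two positions in a set
    are within `second_threshold` of each other (one possible reading of
    the division rule; the exact algorithm is not specified)."""
    sets = []  # each entry is a list of (x, y) positions
    for pos in positions:
        placed = False
        for s in sets:
            # only join a set if pos is close to every existing member
            if all(math.dist(pos, q) <= second_threshold for q in s):
                s.append(pos)
                placed = True
                break
        if not placed:
            sets.append([pos])
    return sets

# Positions of retrieved second images: two groups far apart.
sets = cluster_positions([(0, 0), (1, 0), (0, 1), (50, 50), (51, 50)], 5.0)
```

Any clustering that respects the second threshold (grid binning, agglomerative clustering with a distance cutoff, and so on) would satisfy the same constraint; the greedy pass above is only the simplest option.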
S502, determining a three-dimensional model according to a plurality of position sets.
Alternatively, the three-dimensional model may be determined as follows: obtain the first number of each position set, that is, the number of positions it includes, and determine the three-dimensional model based on the first numbers. For example, if the position set A includes 10 positions, its first number is 10; if the position set B includes 100 positions, its first number is 100.
Optionally, if the server divides the positions of the second images corresponding to each first image separately, it also needs to merge the resulting position sets that cover the same region and obtain the first number of each merged set. For example, suppose the positions of the 10 second images corresponding to the first image A are in the position set A, and the positions of the 20 second images corresponding to the first image B are in the position set B. If the position set A and the position set B were divided from the same region (for example, both were divided according to the No. 1 floor region, so all of their positions lie in the No. 1 floor region), the two sets may be merged, and the first number of the merged set is 30.
Optionally, determining the three-dimensional model according to the first numbers proceeds as follows: determine a target position set among the plurality of position sets according to the first numbers, where the target position set is the set with the largest first number. For example, if the position set A includes 10 positions, the position set B includes 20 positions, and the position set C includes 500 positions, the position set C is determined as the target position set.
The three-dimensional model is then determined according to the second images corresponding to the target position set, where each position in the target position set corresponds to one second image. For example, if the target position set corresponds to 100 images, the three-dimensional model of the first region is determined from those 100 images; if it corresponds to 500 images, the model is determined from those 500 images. Because the second images are retrieved according to the plurality of first images of the first area, and the target position set contains the largest number of positions, the region covered by the target position set is the one most strongly correlated with the first area, so a three-dimensional model of the first area can be accurately constructed from its second images.
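The merge-then-select logic above can be sketched as follows; the region labels ("floor-1", "floor-2") and the set contents are hypothetical illustrations, not part of the patent:

```python
from collections import defaultdict

def merge_and_select(labelled_sets):
    """`labelled_sets` is a list of (region, positions) pairs, one per
    per-first-image position set; sets divided from the same region are
    merged before counting, and the merged set with the largest first
    number (position count) is chosen as the target position set."""
    merged = defaultdict(list)
    for region, positions in labelled_sets:
        merged[region].extend(positions)
    # target set: the merged set containing the most positions
    region, positions = max(merged.items(), key=lambda kv: len(kv[1]))
    return region, positions

sets = [("floor-1", [(0, 0)] * 10),   # from first image A
        ("floor-1", [(0, 1)] * 20),   # from first image B, same region
        ("floor-2", [(9, 9)] * 25)]
region, target = merge_and_select(sets)
```

With these inputs the two floor-1 sets merge to a first number of 30, matching the worked example in the text, and floor-1 wins over floor-2's 25.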
Alternatively, when determining the three-dimensional model according to the second images corresponding to the positions in the target position set, only a subset of those second images may be used. For example, the model may be determined as follows: acquire M third images from the second images corresponding to the target position set, and determine the three-dimensional model from the M third images, where M is a positive integer less than or equal to the first number. For example, if the target position set corresponds to 1000 images, M may be 100, and the server may determine the three-dimensional model from those 100 images.
Optionally, the M third images are acquired from the second images corresponding to the target position set as follows: determine a target image among the plurality of first images, where the target image is the earliest-shot image among them. For example, when a user shoots a plurality of first images of the first area with a terminal device, each first image carries a timestamp, and the server may use the timestamps to pick the earliest-shot image.
Then, the M third images are determined according to the image similarity between the target image and each second image corresponding to the target position set. For example, the server may acquire a second similarity between the target image and each second image corresponding to the target position set, take the 100 second images with the highest second similarity, and determine those 100 images as the third images. Because the target image is the first image the user shot with the terminal device, the server can accurately reconstruct the part of the three-dimensional model corresponding to the target image, which improves the processing efficiency of the server and the user experience.
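The earliest-image-then-top-M selection can be sketched as below; the `similarity` callable stands in for whatever image-similarity measure is used (a hypothetical parameter, since the patent leaves the measure open):

```python
def pick_third_images(first_images, target_set_images, similarity, m):
    """`first_images` is a list of (timestamp, image) pairs; the target
    image is the earliest-shot one. The M third images are the images in
    the target position set most similar to the target image."""
    _, target = min(first_images, key=lambda t: t[0])
    ranked = sorted(target_set_images,
                    key=lambda img: similarity(target, img),
                    reverse=True)
    return ranked[:m]

# Toy illustration with 2-D feature vectors and dot-product similarity.
first_images = [(2.0, (1.0, 0.0)), (1.0, (0.0, 1.0))]  # (timestamp, feature)
candidates = [(0.0, 1.0), (1.0, 0.0), (0.5, 0.5)]
dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
third = pick_third_images(first_images, candidates, dot, 2)
```

The earliest timestamp (1.0) makes `(0.0, 1.0)` the target image, so the two most similar candidates are returned first.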
It should be noted that, the first similarity and the second similarity according to the embodiments of the present disclosure may be cosine similarity, euclidean distance, or other parameters for determining similarity, which are not limited in the embodiments of the present disclosure.
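Both similarity parameters mentioned here have standard definitions over global feature vectors; a minimal sketch of each:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two global feature vectors:
    dot(u, v) / (|u| * |v|), in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def euclidean_distance(u, v):
    """Euclidean distance between two global feature vectors
    (note: smaller means more similar, the opposite of cosine)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

sim = cosine_similarity([1.0, 0.0], [1.0, 0.0])    # identical direction
dist = euclidean_distance([0.0, 0.0], [3.0, 4.0])  # classic 3-4-5 triangle
```

When combining both into a single first similarity, as the text allows, a distance must first be converted to a similarity (e.g. `1 / (1 + dist)`); how the two are weighted is left open by the disclosure.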
The embodiment of the disclosure provides a method for determining a three-dimensional model: divide the positions into a plurality of position sets, obtain the first number of positions included in each position set, determine, according to the first numbers, the target position set with the largest number of positions, and determine the three-dimensional model according to the second images corresponding to the positions in the target position set. With this method the server can accurately reconstruct the three-dimensional model corresponding to the target image, improving its processing efficiency and the user experience. The server selects second images from the same area to construct the model, which prevents images that are similar to the first images but located elsewhere from participating in the construction, reduces the complexity of model construction, and improves the accuracy of the three-dimensional model.
With reference to fig. 6, the procedure of the image processing method will be described below.
Fig. 6 is a process schematic diagram of an image processing method according to an embodiment of the disclosure. Referring to fig. 6, the process involves a terminal device, a server, and a database. The display page of the terminal device shows a first image A and a first image B captured while looking around the scene. When the user clicks a send control in the display page, the terminal device sends the first image A and the first image B to the server. On receiving them, the server retrieves from the database a plurality of second images with high similarity, according to the image features of the first image A and the first image B.
Referring to fig. 6, the server determines the positions of the plurality of second images in a preset area. The preset area includes two grids: one grid contains position A, position B, position C, position D and position E, and the other grid contains position F. The server accordingly determines a position set A, which includes positions A through E, and a position set B, which includes position F.
Referring to fig. 6, the server determines a three-dimensional model according to the second images corresponding to the position set A and sends the model to the terminal device, which displays it in the display page on receipt. In this way, even when the position of the first area is unknown, the server can accurately retrieve from the database, according to the image features of the plurality of first images, a plurality of second images that are visually similar to the first images, and can select second images from the same area to construct the three-dimensional model. This prevents images that are similar to the first images but located elsewhere from participating in the construction of the three-dimensional model of the first area, reduces the complexity of model construction, and improves the accuracy of the three-dimensional model.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. Referring to fig. 7, the image processing apparatus 70 includes a receiving module 71, an acquiring module 72, a determining module 73, and a transmitting module 74, wherein:
The receiving module 71 is configured to receive a plurality of first images sent by a terminal device and associated with a first area, where the first area is an area where the terminal device is located;
the acquiring module 72 is configured to acquire, in a database, at least one second image corresponding to each first image, where an image similarity between each first image and the corresponding at least one second image is greater than or equal to a first threshold;
the determining module 73 is configured to determine a three-dimensional model of the first region based on the position indicated by the second image;
the sending module 74 is configured to send the three-dimensional model to the terminal device.
In one possible implementation, the determining module 73 is specifically configured to:
dividing the position into a plurality of position sets, wherein the distance between any two positions in the position sets is smaller than or equal to a second threshold value;
and determining the three-dimensional model according to the plurality of position sets.
In one possible implementation, the determining module 73 is specifically configured to:
obtaining a first number of locations included in each set of locations;
determining the three-dimensional model according to the first quantity.
In one possible implementation, the determining module 73 is specifically configured to:
Determining a target position set from the plurality of position sets according to the first quantity, wherein the first quantity of the target position set is the largest;
and determining the three-dimensional model according to the second image corresponding to the position in the target position set.
In one possible implementation, the determining module 73 is specifically configured to:
acquiring M third images from the second images corresponding to the target position set, wherein M is a positive integer less than or equal to the first number;
and determining the three-dimensional model from the M third images.
In one possible implementation, the determining module 73 is specifically configured to:
determining a target image in the plurality of first images, wherein the target image is the earliest image shot in the plurality of first images;
and determining the M third images according to the image similarity between the target image and each second image corresponding to the target position set.
In one possible implementation, the obtaining module 72 is specifically configured to:
acquiring a first global feature of the first image;
acquiring a second global feature of each image in the database;
and acquiring at least one second image corresponding to the first image according to the first global feature and the plurality of second global features.
In one possible implementation, the obtaining module 72 is specifically configured to:
acquiring a first similarity between the first global feature and each second global feature;
and if the first similarity is greater than or equal to a first threshold, determining the image corresponding to the second global feature as the second image.
In one possible implementation, the obtaining module 72 is specifically configured to:
determining cosine similarity and/or Euclidean distance between the first global feature and the second global feature;
and determining the first similarity according to the cosine similarity and/or the Euclidean distance.
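The retrieval rule implemented by the acquiring module can be sketched as follows, using cosine similarity as the first similarity; the image ids in `db` are hypothetical illustrations:

```python
import math

def retrieve_second_images(first_feature, database, first_threshold):
    """Keep every database image whose first similarity (cosine
    similarity of global features here) with the first image is greater
    than or equal to the first threshold. `database` maps an image id to
    its global feature vector."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))
    return [img_id for img_id, feat in database.items()
            if cos(first_feature, feat) >= first_threshold]

db = {"img-a": (1.0, 0.0), "img-b": (0.0, 1.0), "img-c": (1.0, 0.2)}
matches = retrieve_second_images((1.0, 0.0), db, 0.9)
```

Here "img-b" is orthogonal to the query (similarity 0) and is filtered out, while "img-a" and "img-c" clear the 0.9 threshold.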
In one possible embodiment, the plurality of first images are look-around images of the first region. The image processing device provided in this embodiment may be used to execute the technical solution of the foregoing method embodiments; its implementation principle and technical effects are similar and are not repeated here.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring to fig. 8, a schematic diagram of an electronic device 800 suitable for implementing embodiments of the present disclosure is shown, where the electronic device 800 may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, the electronic device 800 may include a processing means (e.g., a central processor, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage 808 into a random access Memory (Random Access Memory, RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 are also stored. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 807 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; communication means 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 shows an electronic device 800 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 809, or installed from storage device 808, or installed from ROM 802. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 801.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., connected via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, one or more embodiments of the present disclosure provide an image processing method, the method including:
receiving a plurality of first images which are sent by terminal equipment and are associated with a first area, wherein the first area is an area where the terminal equipment is located;
acquiring at least one second image corresponding to each first image in a database, wherein the image similarity between each first image and the corresponding at least one second image is greater than or equal to a first threshold value;
determining a three-dimensional model of the first region based on the location indicated by the second image;
and sending the three-dimensional model to the terminal equipment.
According to one or more embodiments of the present disclosure, determining a three-dimensional model of the first region based on the location indicated by the second image includes:
dividing the position into a plurality of position sets, wherein the distance between any two positions in the position sets is smaller than or equal to a second threshold value;
and determining the three-dimensional model according to the plurality of position sets.
According to one or more embodiments of the present disclosure, determining the three-dimensional model from the plurality of sets of locations includes:
obtaining a first number of locations included in each set of locations;
Determining the three-dimensional model according to the first quantity.
According to one or more embodiments of the present disclosure, determining the three-dimensional model from the first number includes:
determining a target position set from the plurality of position sets according to the first quantity, wherein the first quantity of the target position set is the largest;
and determining the three-dimensional model according to the second image corresponding to the position in the target position set.
According to one or more embodiments of the present disclosure, determining the three-dimensional model from the second image corresponding to the set of target positions includes:
acquiring M third images from the second images corresponding to the target position set, wherein M is a positive integer less than or equal to the first number;
and determining the three-dimensional model from the M third images.
According to one or more embodiments of the present disclosure, acquiring the M third images from the second images corresponding to the target position set includes:
determining a target image in the plurality of first images, wherein the target image is the earliest image shot in the plurality of first images;
and determining the M third images according to the image similarity between the target image and each second image corresponding to the target position set.
According to one or more embodiments of the present disclosure, for any one first image, acquiring at least one second image corresponding to the first image in a database includes:
acquiring a first global feature of the first image;
acquiring a second global feature of each image in the database;
and acquiring at least one second image corresponding to the first image according to the first global feature and the plurality of second global features.
According to one or more embodiments of the present disclosure, obtaining at least one second image corresponding to the first image according to the first global feature and the plurality of second global features includes:
acquiring a first similarity between the first global feature and each second global feature;
and if the first similarity is greater than or equal to a first threshold, determining the image corresponding to the second global feature as the second image.
According to one or more embodiments of the present disclosure, obtaining a first similarity between the first global feature and each of the second global features includes:
determining cosine similarity and/or Euclidean distance between the first global feature and the second global feature;
And determining the first similarity according to the cosine similarity and/or the Euclidean distance.
According to one or more embodiments of the present disclosure, the plurality of first images are look-around images of the first region.
In a second aspect, one or more embodiments of the present disclosure provide an image processing apparatus including a receiving module, an acquiring module, a determining module, and a transmitting module, wherein:
the receiving module is used for receiving a plurality of first images which are sent by the terminal equipment and are associated with a first area, wherein the first area is the area where the terminal equipment is located;
the acquisition module is used for acquiring at least one second image corresponding to each first image in the database, and the image similarity between each first image and the corresponding at least one second image is larger than or equal to a first threshold value;
the determining module is used for determining a three-dimensional model of the first area based on the position indicated by the second image;
the sending module is used for sending the three-dimensional model to the terminal equipment.
In one possible implementation manner, the determining module is specifically configured to:
dividing the position into a plurality of position sets, wherein the distance between any two positions in the position sets is smaller than or equal to a second threshold value;
And determining the three-dimensional model according to the plurality of position sets.
In one possible implementation manner, the determining module is specifically configured to:
obtaining a first number of locations included in each set of locations;
determining the three-dimensional model according to the first quantity.
In one possible implementation manner, the determining module is specifically configured to:
determining a target position set from the plurality of position sets according to the first quantity, wherein the first quantity of the target position set is the largest;
and determining the three-dimensional model according to the second image corresponding to the position in the target position set.
In one possible implementation manner, the determining module is specifically configured to:
acquiring M third images from the second images corresponding to the target position set, wherein M is a positive integer less than or equal to the first number;
and determining the three-dimensional model from the M third images.
In one possible implementation manner, the determining module is specifically configured to:
determining a target image in the plurality of first images, wherein the target image is the earliest image shot in the plurality of first images;
and determining the M third images according to the image similarity between the target image and each second image corresponding to the target position set.
In one possible implementation manner, the acquiring module is specifically configured to:
acquiring a first global feature of the first image;
acquiring a second global feature of each image in the database;
and acquiring at least one second image corresponding to the first image according to the first global feature and the plurality of second global features.
In one possible implementation manner, the acquiring module is specifically configured to:
acquiring a first similarity between the first global feature and each second global feature;
and if the first similarity is greater than or equal to a first threshold, determining the image corresponding to the second global feature as the second image.
In one possible implementation manner, the acquiring module is specifically configured to:
determining cosine similarity and/or Euclidean distance between the first global feature and the second global feature;
and determining the first similarity according to the cosine similarity and/or the Euclidean distance.
According to one or more embodiments of the present disclosure, the plurality of first images are look-around images of the first region.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
The memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to cause the processor to perform the image processing method as described above in the first aspect and the various possible aspects of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the image processing method as described in the first aspect and the various possible aspects of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the image processing method as described above in the first aspect and the various possible aspects of the first aspect.
The foregoing description covers only the preferred embodiments of the present disclosure and explains the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of the features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by interchanging the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (14)

1. An image processing method, comprising:
receiving a plurality of first images which are sent by terminal equipment and are associated with a first area, wherein the first area is an area where the terminal equipment is located;
acquiring at least one second image corresponding to each first image in a database, wherein the image similarity between each first image and the corresponding at least one second image is greater than or equal to a first threshold;
determining a three-dimensional model of the first area based on positions indicated by the second images;
and sending the three-dimensional model to the terminal equipment.
2. The method of claim 1, wherein determining the three-dimensional model of the first area based on the positions indicated by the second images comprises:
dividing the positions into a plurality of position sets, wherein a distance between any two positions in each position set is smaller than or equal to a second threshold;
and determining the three-dimensional model according to the plurality of position sets.
3. The method of claim 2, wherein determining the three-dimensional model according to the plurality of position sets comprises:
acquiring a first number of positions included in each position set;
and determining the three-dimensional model according to the first number.
4. The method of claim 3, wherein determining the three-dimensional model according to the first number comprises:
determining a target position set from the plurality of position sets according to the first number, wherein the first number of the target position set is the largest;
and determining the three-dimensional model according to the second images corresponding to the target position set.
5. The method of claim 4, wherein determining the three-dimensional model according to the second images corresponding to the target position set comprises:
acquiring M third images from the second images corresponding to the target position set, wherein M is a positive integer smaller than or equal to the first number;
and determining the three-dimensional model according to the M third images.
6. The method of claim 5, wherein acquiring M third images from the second images corresponding to the target position set comprises:
determining a target image in the plurality of first images, wherein the target image is the earliest-captured image among the plurality of first images;
and determining the M third images according to the image similarity between the target image and each second image corresponding to the target position set.
7. The method according to any one of claims 1-6, wherein, for any first image, acquiring at least one second image corresponding to the first image in the database comprises:
acquiring a first global feature of the first image;
acquiring a second global feature of each image in the database;
and acquiring at least one second image corresponding to the first image according to the first global feature and the plurality of second global features.
8. The method of claim 7, wherein acquiring at least one second image corresponding to the first image based on the first global feature and the plurality of second global features comprises:
acquiring a first similarity between the first global feature and each second global feature;
and if the first similarity is greater than or equal to the first threshold, determining the image corresponding to the second global feature as the second image.
9. The method of claim 8, wherein obtaining a first similarity between the first global feature and each of the second global features comprises:
determining cosine similarity and/or Euclidean distance between the first global feature and the second global feature;
and determining the first similarity according to the cosine similarity and/or the Euclidean distance.
10. The method of any one of claims 1-9, wherein the plurality of first images are panoramic images of the first region.
11. An image processing apparatus, comprising a receiving module, an acquiring module, a determining module, and a transmitting module, wherein:
the receiving module is used for receiving a plurality of first images which are sent by the terminal equipment and are associated with a first area, wherein the first area is the area where the terminal equipment is located;
the acquisition module is used for acquiring at least one second image corresponding to each first image in the database, and the image similarity between each first image and the corresponding at least one second image is larger than or equal to a first threshold value;
the determining module is used for determining a three-dimensional model of the first area based on positions indicated by the second images;
the sending module is used for sending the three-dimensional model to the terminal equipment.
12. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory, causing the processor to perform the image processing method according to any one of claims 1 to 10.
13. A computer-readable storage medium, in which computer-executable instructions are stored which, when executed by a processor, implement the image processing method of any one of claims 1 to 10.
14. A computer program product comprising a computer program which, when executed by a processor, implements the image processing method according to any one of claims 1 to 10.
CN202210482384.XA 2022-05-05 2022-05-05 Image processing method and device and electronic equipment Pending CN117082226A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210482384.XA CN117082226A (en) 2022-05-05 2022-05-05 Image processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN117082226A true CN117082226A (en) 2023-11-17

Family

ID=88702907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210482384.XA Pending CN117082226A (en) 2022-05-05 2022-05-05 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN117082226A (en)

Similar Documents

Publication Publication Date Title
CN109584276B (en) Key point detection method, device, equipment and readable medium
WO2021037051A1 (en) Positioning method and electronic device
CN110188719B (en) Target tracking method and device
CN112288853B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, and storage medium
CN109754464B (en) Method and apparatus for generating information
CN112907628A (en) Video target tracking method and device, storage medium and electronic equipment
CN110188782B (en) Image similarity determining method and device, electronic equipment and readable storage medium
CN111586295B (en) Image generation method and device and electronic equipment
Jiao et al. A hybrid fusion of wireless signals and RGB image for indoor positioning
CN117082226A (en) Image processing method and device and electronic equipment
CN115408609A (en) Parking route recommendation method and device, electronic equipment and computer readable medium
CN113238652B (en) Sight line estimation method, device, equipment and storage medium
CN113223012B (en) Video processing method and device and electronic device
CN115393423A (en) Target detection method and device
CN116055798A (en) Video processing method and device and electronic equipment
CN110188833B (en) Method and apparatus for training a model
CN111383337B (en) Method and device for identifying objects
CN115937305A (en) Image processing method and device and electronic equipment
CN110619089B (en) Information retrieval method and device
CN110320496B (en) Indoor positioning method and device
CN113191257A (en) Order of strokes detection method and device and electronic equipment
CN112037280A (en) Object distance measuring method and device
CN116843842A (en) Three-dimensional map distributed construction method and device and electronic equipment
CN111368015B (en) Method and device for compressing map
CN112598732B (en) Target equipment positioning method, map construction method and device, medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination