CN112214624B - Method, apparatus, device and storage medium for processing image - Google Patents


Info

Publication number
CN112214624B
CN112214624B (application CN202011092196.3A)
Authority
CN
China
Prior art keywords
image
acquisition
determining
images
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011092196.3A
Other languages
Chinese (zh)
Other versions
CN112214624A (en
Inventor
白国财
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011092196.3A priority Critical patent/CN112214624B/en
Publication of CN112214624A publication Critical patent/CN112214624A/en
Application granted granted Critical
Publication of CN112214624B publication Critical patent/CN112214624B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method, apparatus, device, and storage medium for processing images, relating to the field of computer vision. The specific implementation scheme is as follows: acquiring a plurality of images collected at a plurality of acquisition points in a preset road section; determining the acquisition direction of each image; clustering the plurality of images according to the acquisition direction to obtain at least one image cluster; determining the direction corresponding to each image cluster; and determining the jump relationship among the images according to the direction corresponding to each image cluster and the position of each acquisition point. This implementation does not require binding the images to a road network; it realizes street view display while simplifying operation and reducing production cost.

Description

Method, apparatus, device and storage medium for processing image
Technical Field
The present application relates to the field of computer technology, and in particular, to the field of computer vision, and more particularly, to a method, apparatus, device, and storage medium for processing images.
Background
With the continuous development of artificial intelligence in recent years, computer vision technology has been applied more and more widely. For many applications, the field of view of an ordinary lens has proven too limited. Wide-angle and panoramic images, with their very large viewing angles, capture more scene information in a single shot, and are therefore widely used in security monitoring, industrial and medical imaging, intelligent transportation, and other fields.
When a wide-angle or panoramic image is used in an Internet street view service, browsing can reproduce the experience of the actual street scene. The traditional approach binds each image to a road, but acquiring and producing the road network requires considerable cost, so binding images to roads is complex and expensive.
Disclosure of Invention
Provided are a method, apparatus, device, and storage medium for processing an image.
According to a first aspect, there is provided a method for processing an image, comprising: acquiring a plurality of images acquired by a plurality of acquisition points in a preset road section; determining the acquisition direction of each image; clustering the plurality of images according to the acquisition direction to obtain at least one image cluster; determining the corresponding direction of each image cluster; and determining the jump relation among the images according to the direction corresponding to each image cluster and the position of each acquisition point.
According to a second aspect, there is provided an apparatus for processing an image, comprising: an image acquisition unit configured to acquire a plurality of images acquired at a plurality of acquisition points within a preset road section; a first direction determining unit configured to determine a collection direction of each image; the image clustering unit is configured to cluster the plurality of images according to the acquisition direction to obtain at least one image cluster; a second direction determining unit configured to determine a direction to which each image cluster corresponds; and the jump determining unit is configured to determine the jump relation among the images according to the direction corresponding to the image clusters and the position of each acquisition point.
According to a third aspect, there is provided an electronic device for processing an image, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method as described in the first aspect.
The technology of the present application solves the technical problem in existing street view display methods that images must be bound to a road network and production is complex: images no longer need to be bound to a road network, street view display is realized, operation is simplified, and production cost is reduced.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for processing an image according to the present application;
FIG. 3 is a schematic illustration of one application scenario of a method for processing images according to the present application;
FIG. 4 is a flow chart of another embodiment of a method for processing an image according to the present application;
FIG. 5 is a schematic view of a bifurcation junction according to the embodiment shown in FIG. 4;
FIG. 6 is a schematic structural view of one embodiment of an apparatus for processing images according to the present application;
fig. 7 is a block diagram of an electronic device for implementing a method for processing images of embodiments of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for processing images or the apparatus for processing images of the present application may be applied.
As shown in fig. 1, a system architecture 100 may include a camera 101, a network 102, and a terminal device 103. The network 102 is used as a medium to provide a communication link between the camera 101 and the terminal device 103. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
The camera 101 may be various image capturing devices capable of capturing a wide-angle image or a panoramic image, and may transmit the captured wide-angle image or panoramic image to the terminal device 103 through the network 102 to cause the terminal device 103 to process the wide-angle image or panoramic image. The camera 101 may include a smart phone, a smart video camera, and the like.
The terminal device 103 may be various electronic devices capable of processing images, in which various types of client applications may be installed, such as image processing applications and social platform applications.
The terminal device 103 may be hardware or software. When the terminal device 103 is hardware, it may be any of a variety of electronic devices, including but not limited to tablet computers, car computers, laptop computers, desktop computers, and the like. When the terminal device 103 is software, it can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. The present application is not particularly limited herein.
It should be noted that the method for processing an image provided in the embodiment of the present application is generally performed by the terminal device 103. Accordingly, the means for processing the image is typically provided in the terminal device 103.
It should be understood that the number of cameras, networks and terminal devices in fig. 1 is merely illustrative. There may be any number of cameras, networks, and terminal devices as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for processing an image according to the present application is shown. The method for processing an image of the present embodiment includes the steps of:
step 201, acquiring a plurality of images acquired at a plurality of acquisition points in a preset road section.
In this embodiment, the execution subject of the method for processing an image (for example, the terminal device 103 shown in fig. 1) may acquire a plurality of images acquired at a plurality of acquisition points within a preset road section by wired connection or wireless connection. The image may be a wide-angle image or a panoramic image. Here, the wide-angle image may be an image having a viewing angle of 180 degrees or even up to 220 degrees, and the panoramic image may be an image having a viewing angle of 360 degrees. The preset road section can be a road section with a preset length or a road section with a preset position. Acquisition points can be understood as location points for acquiring images. The positions of the plurality of acquisition points can be preset or determined according to the actual conditions of the road sections. The image may be acquired by a camera at a plurality of acquisition points. The camera can be installed on the collection vehicle, and in the driving process of the collection vehicle, images are continuously collected. The camera can also be held by a user to continuously acquire images according to the walking route of the user.
Step 202, determining the acquisition direction of each image.
The execution body may also determine the acquisition direction of each image. Specifically, if the image is a panoramic image, the execution body may take the pointing direction of the earliest-captured frame in the panoramic image as the acquisition direction of the image. Alternatively, the execution body may determine the acquisition direction of each image according to the traveling direction of the vehicle on which the camera is mounted, or determine the direction at the time of capture via a sensor mounted in the camera (for example, a gyroscope) and take this as the acquisition direction.
And 203, clustering the plurality of images according to the acquisition direction to obtain at least one image cluster.
After determining the acquisition direction of each image, the execution subject may cluster the plurality of images to obtain at least one image cluster. Specifically, the execution subject may cluster each image using a variety of existing clustering algorithms. It will be appreciated that at least one image may be included in each image cluster. Images with the same acquisition direction are located in the same image cluster.
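As an illustration of this clustering step, the following Python sketch greedily groups images whose acquisition directions differ by less than a fixed angle; the 30-degree threshold, the `direction` field, and the greedy strategy are assumptions for illustration, not details prescribed by the text (which allows any existing clustering algorithm):

```python
def cluster_by_direction(images, angle_threshold_deg=30.0):
    """Greedily group images whose acquisition directions (in degrees)
    differ by less than angle_threshold_deg, with wrap-around at 360."""
    clusters = []
    for img in images:
        placed = False
        for cluster in clusters:
            diff = abs(img["direction"] - cluster[0]["direction"]) % 360.0
            diff = min(diff, 360.0 - diff)  # shortest angular distance
            if diff < angle_threshold_deg:
                cluster.append(img)
                placed = True
                break
        if not placed:
            clusters.append([img])
    return clusters

imgs = [{"id": 0, "direction": 2.0}, {"id": 1, "direction": 5.0},
        {"id": 2, "direction": 181.0}, {"id": 3, "direction": 359.0}]
clusters = cluster_by_direction(imgs)
```

With this sketch, the images at 2, 5, and 359 degrees fall into one cluster thanks to the wrap-around comparison, while the image at 181 degrees forms its own cluster.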
Step 204, determining a direction corresponding to each image cluster.
The execution body may also determine the direction corresponding to each image cluster. Specifically, for each image cluster, the execution body may average the directions of the images it contains and use the result as the direction corresponding to that cluster. Alternatively, the execution body may use the direction of any one image in the cluster as the direction corresponding to the cluster.
Step 205, determining the jump relation between the images according to the direction corresponding to the image clusters and the position of the acquisition points.
After determining the direction corresponding to each image cluster, the execution body can determine the jump relationships between the images by combining the positions of the acquisition points. Specifically, the execution body may first determine the jump relationships between images within the same image cluster, and then determine the jump relationships between images in different image clusters. For example, the execution body may determine that a jump relationship exists between the images corresponding to the two closest acquisition points in the same image cluster, whose acquisition directions are the same. Then, the distances between each acquisition point and the acquisition points with different acquisition directions are examined, and the closest acquisition points whose corresponding images have different acquisition directions are determined; these acquisition points may be located in different image clusters. Finally, the jump relationships between the images corresponding to these acquisition points are determined. A jump relationship is understood here as a jump from a previous panoramic image to a subsequent one, where the previous image contains the location of the acquisition point of the subsequent image. For example, if the former image contains a location A, and the latter image was acquired at location A, then when the user clicks on location A in the former image, the display jumps to the latter image.
Referring to fig. 3, a schematic diagram of one application scenario of the method for processing images according to the present application is shown. In the scenario shown in fig. 3, the terminal device acquires a plurality of images collected at different acquisition points A, B, C, and D of the road segment 301. By analyzing these images, the jump relationship among them is determined to run from acquisition point A to B, from B to C, and from C to D.
According to the method for processing the images, provided by the embodiment of the application, the direction corresponding to each image cluster is obtained by clustering the acquisition direction of the images, and the jump relation among the images is determined by combining the positions of the acquisition points, so that the images and the road network are not required to be bound, and the street view display cost is reduced.
With continued reference to fig. 4, a flow 400 of another embodiment of a method for processing an image according to the present application is shown. In this embodiment, the image may be a panoramic image. The method may comprise the steps of:
step 401, acquiring a plurality of images acquired at a plurality of acquisition points in a preset road section.
Step 402, determining the acquisition direction of each image.
Step 403, clustering the plurality of images according to the acquisition direction to obtain at least one image cluster.
Step 404, determining a direction corresponding to each image cluster.
The principle of steps 401 to 404 is similar to that of steps 201 to 204 and will not be described here again.
In step 405, the position of each acquisition point is corrected.
During image acquisition, the positioning device may exhibit deviation or drift when locating the acquisition points. The positioning device can be installed in the camera or in the collection vehicle, and may be any device used for positioning, such as a GPS chip. In this embodiment, the execution body may correct the position of each acquisition point. Specifically, from the positions of the acquisition points whose images share the same acquisition direction, the execution body may determine a line running in that acquisition direction, and correct each acquisition point whose distance to the line is greater than a preset value, for example by moving it to a position on the line.
In step 406, in response to determining that the angle between the directions corresponding to the at least two image clusters is greater than a preset threshold, a center point of each acquisition point is determined.
In this embodiment, after obtaining a plurality of image clusters, the execution body may determine the angles between the directions corresponding to the image clusters. If an angle is greater than a preset threshold, the preset road section includes a bifurcation intersection (as shown in fig. 5): the angle between the directions corresponding to image clusters G1 and G2 in fig. 5 exceeds the preset threshold. The preset threshold may be, for example, 30 degrees, which is not limited in this embodiment. In this case, the execution body may first determine the center point of the acquisition points (point E in fig. 5). The center point may be determined from the centroids of the image clusters. Specifically, the execution body may first determine, from the positions of the acquisition points of the images in each image cluster, the centroid corresponding to that cluster, and then compute the mean of these centroids to obtain the center point of the acquisition points.
Step 407, determining a directional connecting line between the acquisition points according to the position of the center point, the position of each acquisition point and the acquisition direction of the image corresponding to each acquisition point.
After determining the center point, the execution body can determine the directional connecting lines among the acquisition points according to the position of the center point, the positions of the acquisition points, and the acquisition directions of the corresponding images. Specifically, the execution body may connect the acquisition points of a single image cluster in order of decreasing distance from the center point, obtaining the directional connecting line corresponding to that cluster. The directional connecting lines may also be referred to as the topology of the preset road section.
Step 408, determining the jump relation between the images according to the directed connecting lines.
Finally, the execution subject may determine a jump relationship between the images based on the directional connection lines. Specifically, the execution subject may determine the front-to-back relationship between the images according to the orientation of the directional connection line. Then, a skip relationship is determined as a skip from the previous image to the next image.
It should be noted that a directional connecting line may be bidirectional. Taking fig. 3 as an example, the line may run A→B→C→D or D→C→B→A. Thus, the image corresponding to acquisition point A can jump to the image corresponding to acquisition point B, and the image corresponding to acquisition point B can likewise jump back to the image corresponding to acquisition point A. In some applications, an image may have jump relationships with multiple images; that is, the image at acquisition point A may also jump directly to the images at acquisition points C and D.
In step 406', responsive to determining that the angle between the directions corresponding to the at least two image clusters is less than or equal to a preset threshold, directionally connecting the acquisition points according to the positions of the acquisition points.
In this embodiment, if the execution body determines that the angles between the directions corresponding to the image clusters are all less than or equal to the preset threshold, the preset road section is considered not to include a bifurcation intersection; that is, it is a single-direction road. The acquisition points can then be connected with directional lines according to their positions.
Step 407', determining the jump relation between the images according to the directional connecting lines between the acquisition points.
The principle of step 407' is similar to that of step 408 and will not be described again here.
In some alternative implementations of the present embodiment, the step 402 may be implemented specifically by the following steps not shown in fig. 4: and determining the acquisition direction of the image according to the running direction of the acquisition vehicle when the image is acquired.
In this implementation manner, the execution subject may determine the acquisition direction of the image according to the traveling direction of the acquisition vehicle when the image is acquired. Specifically, the execution subject may take the direction of the lane in which the collection vehicle is located when the image is collected as the collection direction of the image.
In some optional implementations of this embodiment, the image is a panoramic image. The above step 402 may be implemented specifically by the following steps not shown in fig. 4: and determining the acquisition direction of each panoramic image according to the direction of the first frame image in each panoramic image.
In this implementation manner, for each panoramic image, the execution subject may further determine, according to the direction of the first frame image in the panoramic image, the acquisition direction of the panoramic image. Here, the first frame image refers to an image captured earliest by the panoramic camera. The execution subject may determine the acquisition direction of the panoramic image according to the direction indicated by the content included in the first frame image.
In some alternative implementations of the present embodiment, the step 404 may be specifically implemented by the following steps not shown in fig. 4: and determining the direction corresponding to each image cluster according to the direction of each image in the image clusters.
In this implementation manner, the execution subject may determine, according to the direction of each image in the image clusters, the direction corresponding to each image cluster. Specifically, the average value of the directions of the panoramic images can be used as the direction corresponding to each image cluster.
In some alternative implementations of the present embodiment, the execution body may implement step 405 by the following steps not shown in fig. 4: performing straight line fitting on acquisition points corresponding to the images in each image cluster; and correcting the position of each acquisition point according to the position of the projection point of each acquisition point to the fitting straight line.
In this implementation manner, the execution subject may first perform straight line fitting on the acquisition points corresponding to the images in each image cluster. Then, the positions of the respective acquisition points are corrected based on the positions of the respective acquisition points to the projection points of the fitting straight line. For example, the position of the projection point is taken as the corrected position of the acquisition point. Alternatively, the position of the midpoint between the projection point and the acquisition point is used as the corrected position of the acquisition point.
In some optional implementations of this embodiment, the directional connection lines may include a first directional connection line and a second directional connection line. The execution body may also determine the directional connection lines between the acquisition points by the following steps, not shown in fig. 4: according to the position of the center point and the position of each acquisition point, determining the acquisition point related to the center point from the acquisition points corresponding to the images in each image cluster as a related point; determining a first directional connecting line between the corresponding acquisition points of the images in each image cluster according to the acquisition direction of the images in each image cluster and the positions of the corresponding acquisition points; and determining a second directional connecting line between the related points according to the acquisition direction and the position of the image corresponding to the related points.
In this implementation manner, the execution body may first determine, according to the position of the center point and the position of each acquisition point, an acquisition point related to the center point from the acquisition points corresponding to the images in each image cluster, as the related point. Here, the relevant point may be the acquisition point closest to the center point in each image cluster. Then, for each image cluster, the execution subject may determine a first directional connection line between the collection points corresponding to the images in each image cluster according to the collection direction of the images and the positions of the corresponding collection points in each image cluster. Specifically, the execution body may sequentially connect the collection points according to the order from the large distance to the small distance between the collection points and the center point, so as to obtain a first directional connection line between the collection points corresponding to the images in the single image cluster. The execution body may further determine a second directional connection line between the relevant points according to the acquisition direction and the position of the image corresponding to the relevant points. Specifically, the execution body may directly connect any two relevant points, that is, the connection line between any two relevant points is bidirectional. Or, the executing body may determine the second directional connecting line according to the acquisition direction of the image corresponding to the relevant point, where the condition to be satisfied is: the angle between the second directional connecting line and the direction corresponding to the relevant point is smaller than a preset value.
In some optional implementations of this embodiment, when executing the directional connection lines between the collection points corresponding to the images in the image cluster, the main body may determine the following steps not shown in fig. 4: determining the distance between the acquisition points according to the positions of the acquisition points corresponding to the images in each image cluster; and in response to determining that the distance is smaller than a preset threshold, connecting the two acquisition points corresponding to the distance in a directed manner.
In this implementation, when determining the directional connecting lines between the acquisition points, the execution body may first calculate the distance between the acquisition points corresponding to the images in a single image cluster. If the distance is smaller than a preset threshold, the two acquisition points are connected with a directional line. If the distance is greater than or equal to the preset threshold, the two acquisition points are considered unrelated and are not connected.
The method for processing the images provided by the embodiment of the application can be used for respectively processing the situations of the bifurcation intersection and the non-bifurcation intersection, so that the jump relationship between the obtained images is more accurate; meanwhile, the positions of all the acquisition points can be corrected, and the accuracy of the image position information is improved.
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of an apparatus for processing an image, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus is particularly applicable to various electronic devices.
As shown in fig. 6, the image processing apparatus 600 of the present embodiment includes: an image acquisition unit 601, a first direction determination unit 602, an image clustering unit 603, a second direction determination unit 604, and a jump determination unit 605.
The image acquisition unit 601 is configured to acquire a plurality of images acquired at a plurality of acquisition points within a preset road section.
A first direction determining unit 602 configured to determine a collection direction of each image.
And the image clustering unit 603 is configured to cluster the plurality of images according to the acquisition direction to obtain at least one image cluster.
The second direction determining unit 604 is configured to determine a direction corresponding to each image cluster.
The jump determination unit 605 is configured to determine the jump relationship between the images according to the direction corresponding to each image cluster and the positions of the acquisition points.
In some optional implementations of the present embodiment, the jump determination unit 605 may be further configured to: determining a center point of each acquisition point in response to determining that the angle between the directions corresponding to at least two image clusters is greater than a preset threshold; determining a directional connecting line between the acquisition points according to the positions of the center points, the positions of the acquisition points and the acquisition directions of the images corresponding to the acquisition points; and determining the jump relation between the images according to the directed connecting lines.
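The fork/non-fork decision made by the jump determination unit can be illustrated as follows (a minimal sketch; representing directions as compass angles in degrees and taking the center point as a simple centroid are assumptions for illustration, which the embodiment does not mandate):

```python
import math

def fork_detected(dir1_deg, dir2_deg, threshold_deg):
    """Return True when the included angle between two cluster directions
    exceeds the preset threshold, i.e. the road section forks."""
    diff = abs(dir1_deg - dir2_deg) % 360.0
    angle = min(diff, 360.0 - diff)  # included angle in [0, 180]
    return angle > threshold_deg

def center_point(points):
    """Centroid of all acquisition points, used here as the center point."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```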
In some alternative implementations of the present embodiment, the directional connection lines include a first directional connection line and a second directional connection line. The jump determination unit 605 may be further configured to: according to the position of the center point and the position of each acquisition point, determining the acquisition point related to the center point from the acquisition points corresponding to the images in each image cluster as a related point; determining a first directional connecting line between the corresponding acquisition points of the images in each image cluster according to the acquisition direction of the images in each image cluster and the positions of the corresponding acquisition points; and determining a second directional connecting line between the related points according to the acquisition direction and the position of the image corresponding to the related points.
In some optional implementations of the present embodiment, the jump determination unit 605 may be further configured to: determining the distance between the acquisition points according to the positions of the acquisition points corresponding to the images in each image cluster; and responding to the fact that the determined distance is smaller than a preset threshold value, and connecting the two acquisition points corresponding to the distance in a directed mode.
In some optional implementations of the present embodiment, the jump determination unit 605 may be further configured to: in response to determining that the angle between the directions corresponding to the at least two image clusters is less than or equal to a preset threshold, directionally connecting the acquisition points according to the positions of the acquisition points; and determining the jump relation between the images according to the directional connecting lines between the acquisition points.
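For the non-fork case just described, connecting acquisition points by position alone can be sketched as follows (illustrative only; it assumes the points can be ordered along the road by their coordinates):

```python
def chain_acquisition_points(points):
    """Non-fork case: when the cluster directions agree (angle at or below
    the threshold), simply chain the acquisition points in positional order
    along the road. Assumes (x, y) tuples lying roughly on one line."""
    ordered = sorted(points)  # order along the road by position
    return list(zip(ordered, ordered[1:]))
```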
In some alternative implementations of the present embodiment, the apparatus 600 may further include a position correction unit, not shown in fig. 6, configured to correct the position of each of the collection points.
In some optional implementations of the present embodiment, the position correction unit may be further configured to: performing straight line fitting on acquisition points corresponding to the images in each image cluster; and correcting the position of each acquisition point according to the position of the projection point of each acquisition point to the fitting straight line.
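The line fitting and projection performed by the position correction unit can be sketched as follows (a sketch using a 2-D principal-axis fit; the patent does not prescribe a particular fitting method, and the function name is hypothetical):

```python
import math

def correct_positions(points):
    """Fit a straight line to a cluster's acquisition points and snap each
    point to its orthogonal projection onto the fitted line."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Covariance terms of the centred points.
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    # Principal-axis angle (handles vertical roads, unlike fitting y = a*x + b).
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    dx, dy = math.cos(theta), math.sin(theta)
    corrected = []
    for x, y in points:
        t = (x - mx) * dx + (y - my) * dy  # signed offset along the line
        corrected.append((mx + t * dx, my + t * dy))
    return corrected
```

Points already on a line are left unchanged; noisy points are moved onto the fitted line, so the corrected positions are exactly collinear.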
In some optional implementations of the present embodiment, the first direction determining unit 602 may be further configured to: and determining the acquisition direction of the image according to the running direction of the acquisition vehicle when the image is acquired.
In some optional implementations of the present embodiment, the second direction determining unit 604 may be further configured to: and determining the direction corresponding to each image cluster according to the direction of each image in the image clusters.
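Deriving a cluster's direction from the directions of its member images can be sketched with a circular mean (an assumption for illustration; averaging unit vectors avoids the wrap-around at 0/360 degrees that a naive arithmetic mean of angles would get wrong):

```python
import math

def cluster_direction(angles_deg):
    """Direction of an image cluster as the circular mean of its images'
    acquisition directions, in degrees in [0, 360)."""
    x = sum(math.cos(math.radians(a)) for a in angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(y, x)) % 360.0
```

For example, images heading 350° and 10° yield a cluster direction of about 0° (due north), not the 180° a plain average would give.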
In some alternative implementations of the present embodiment, the image includes a panoramic image.
In some optional implementations of the present embodiment, the first direction determining unit 602 may be further configured to: and determining the acquisition direction of each panoramic image according to the direction of the first frame image in each panoramic image.
It should be understood that the units 601 to 605 described in the apparatus 600 for processing an image correspond to the respective steps in the method described with reference to fig. 2. Thus, the operations and features described above with respect to the method for processing an image are equally applicable to the apparatus 600 and the units contained therein, and are not described in detail herein.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device for performing the method for processing images according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 7, the electronic device includes one or more processors 701, a memory 702, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Likewise, multiple electronic devices may be connected, with each device providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 701 is taken as an example in fig. 7.
The memory 702 is the non-transitory computer-readable storage medium provided herein, storing instructions executable by the at least one processor, so that the at least one processor performs the method for processing images provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the method for processing images provided herein.
As a non-transitory computer-readable storage medium, the memory 702 may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for processing images in the embodiments of the present application (e.g., the image acquisition unit 601, the first direction determining unit 602, the image clustering unit 603, the second direction determining unit 604, and the jump determining unit 605 shown in fig. 6). The processor 701 executes the non-transitory software programs, instructions, and modules stored in the memory 702 to perform the various functional applications and data processing of the server, that is, to implement the method for processing images of the above method embodiment.
The memory 702 may include a storage program area and a storage data area; the storage program area may store an operating system and an application program required by at least one function, and the storage data area may store data created according to the use of the electronic device for processing images, and the like. In addition, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, which may be connected to the electronic device for processing images via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device performing the method for processing images may further include an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703, and the output device 704 may be connected by a bus or otherwise; connection by a bus is taken as an example in fig. 7.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for processing images; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, a joystick, and the like. The output device 704 may include a display apparatus, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display apparatus may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display apparatus may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special purpose or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solutions of the embodiments of the present application, the direction corresponding to each image cluster is obtained by clustering the images by acquisition direction, and the jump relationships between the images are determined in combination with the positions of the acquisition points, so that the images do not need to be bound to a road network, which reduces the cost of street view display.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (20)

1. A method for processing an image, comprising:
acquiring a plurality of images acquired by a plurality of acquisition points in a preset road section;
determining the acquisition direction of each image;
clustering the plurality of images according to the acquisition direction to obtain at least one image cluster;
determining the corresponding direction of each image cluster;
determining a jump relation between images according to the direction corresponding to each image cluster and the position of each acquisition point, wherein the determining comprises: determining a center point of each acquisition point in response to determining that the angle between the directions corresponding to at least two image clusters is greater than a preset threshold; determining a directional connecting line between the acquisition points according to the position of the center point, the position of each acquisition point and the acquisition direction of the image corresponding to each acquisition point; determining a jump relation among the images according to the directed connecting lines;
in response to determining that the angle between the directions corresponding to at least two image clusters is less than or equal to the preset threshold, directionally connecting the acquisition points according to the positions of the acquisition points; and determining the jump relation between the images according to the directional connecting lines between the acquisition points.
2. The method of claim 1, wherein the directional connection lines comprise a first directional connection line and a second directional connection line; and
the determining the directional connecting line between the collection points according to the position of the center point, the position of each collection point and the collection direction of the image corresponding to each collection point comprises the following steps:
according to the position of the center point and the position of each acquisition point, determining the acquisition point related to the center point from each acquisition point corresponding to each image in each image cluster as a related point;
determining a first directional connecting line between the corresponding acquisition points of the images in each image cluster according to the acquisition direction of the images in each image cluster and the positions of the corresponding acquisition points;
and determining a second directional connecting line between the related points according to the acquisition direction and the position of the image corresponding to the related points.
3. The method of claim 2, wherein determining the directional connection lines between the corresponding collection points of the images in each image cluster according to the collection direction of the images and the positions of the corresponding collection points in the image cluster comprises:
determining the distance between the acquisition points according to the positions of the acquisition points corresponding to the images in each image cluster;
and in response to determining that the distance is smaller than a preset threshold, connecting the two acquisition points corresponding to the distance in a directed manner.
4. The method of claim 1, wherein the method further comprises:
correcting the position of each acquisition point.
5. The method of claim 4, wherein correcting the location of each acquisition point comprises:
performing straight line fitting on acquisition points corresponding to the images in each image cluster;
and correcting the position of each acquisition point according to the position of the projection point of each acquisition point to the fitting straight line.
6. The method of claim 1, wherein the determining the acquisition direction of each image comprises:
and determining the acquisition direction of the image according to the running direction of the acquisition vehicle when the image is acquired.
7. The method of claim 1, wherein the determining a direction to which each image cluster corresponds comprises:
and determining the direction corresponding to each image cluster according to the direction of each image in the image clusters.
8. The method of any of claims 1-7, wherein the image comprises a panoramic image.
9. The method of claim 8, wherein the determining the acquisition direction of each image comprises:
and determining the acquisition direction of each panoramic image according to the direction of the first frame image in each panoramic image.
10. An apparatus for processing an image, comprising:
an image acquisition unit configured to acquire a plurality of images acquired at a plurality of acquisition points within a preset road section;
a first direction determining unit configured to determine a collection direction of each image;
the image clustering unit is configured to cluster the plurality of images according to the acquisition direction to obtain at least one image cluster;
a second direction determining unit configured to determine a direction to which each image cluster corresponds;
a jump determining unit configured to determine a jump relationship between the images according to a direction corresponding to the clusters of the images and a position of the acquisition points;
wherein the jump determination unit is further configured to:
determining a center point of each acquisition point in response to determining that the angle between the directions corresponding to at least two image clusters is greater than a preset threshold;
determining a directional connecting line between the acquisition points according to the position of the center point, the position of each acquisition point and the acquisition direction of the image corresponding to each acquisition point;
determining a jump relation among the images according to the directed connecting lines;
in response to determining that the angle between the directions corresponding to at least two image clusters is less than or equal to the preset threshold, directionally connecting the acquisition points according to the positions of the acquisition points;
and determining the jump relation between the images according to the directional connecting lines between the acquisition points.
11. The apparatus of claim 10, wherein the directional connection lines comprise a first directional connection line and a second directional connection line; and
the jump determination unit is further configured to:
according to the position of the center point and the position of each acquisition point, determining the acquisition point related to the center point from each acquisition point corresponding to each image in each image cluster as a related point;
determining a first directional connecting line between the corresponding acquisition points of the images in each image cluster according to the acquisition direction of the images in each image cluster and the positions of the corresponding acquisition points;
and determining a second directional connecting line between the related points according to the acquisition direction and the position of the image corresponding to the related points.
12. The apparatus of claim 11, wherein the jump determination unit is further configured to:
determining the distance between the acquisition points according to the positions of the acquisition points corresponding to the images in each image cluster;
and in response to determining that the distance is smaller than a preset threshold, connecting the two acquisition points corresponding to the distance in a directed manner.
13. The apparatus of claim 10, wherein the apparatus further comprises:
and a position correction unit configured to correct the position of each of the acquisition points.
14. The apparatus of claim 13, wherein the position correction unit is further configured to:
performing straight line fitting on acquisition points corresponding to the images in each image cluster;
and correcting the position of each acquisition point according to the position of the projection point of each acquisition point to the fitting straight line.
15. The apparatus of claim 10, wherein the first direction determination unit is further configured to:
and determining the acquisition direction of the image according to the running direction of the acquisition vehicle when the image is acquired.
16. The apparatus of claim 10, wherein the second direction determination unit is further configured to:
and determining the direction corresponding to each image cluster according to the direction of each image in the image clusters.
17. The apparatus of any of claims 10-16, wherein the image comprises a panoramic image.
18. The apparatus of claim 17, wherein the first direction determination unit is further configured to:
and determining the acquisition direction of each panoramic image according to the direction of the first frame image in each panoramic image.
19. An electronic device for processing an image, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-9.
CN202011092196.3A 2020-10-13 2020-10-13 Method, apparatus, device and storage medium for processing image Active CN112214624B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011092196.3A CN112214624B (en) 2020-10-13 2020-10-13 Method, apparatus, device and storage medium for processing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011092196.3A CN112214624B (en) 2020-10-13 2020-10-13 Method, apparatus, device and storage medium for processing image

Publications (2)

Publication Number Publication Date
CN112214624A CN112214624A (en) 2021-01-12
CN112214624B true CN112214624B (en) 2024-04-12

Family

ID=74053982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011092196.3A Active CN112214624B (en) 2020-10-13 2020-10-13 Method, apparatus, device and storage medium for processing image

Country Status (1)

Country Link
CN (1) CN112214624B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060023685A * 2004-09-10 2006-03-15 Samsung Electronics Co., Ltd. The method of designing clustered-dot screen based on the human visual characteristics and the printer model, the device thereof and the image forming device outputting binary image using the designed screen
CN106649777A (en) * 2016-12-27 2017-05-10 中科宇图科技股份有限公司 Method for constructing topological relation of intersection in panoramic vector data
CN108036794A (en) * 2017-11-24 2018-05-15 华域汽车系统股份有限公司 A kind of high accuracy map generation system and generation method
CN111626206A (en) * 2020-05-27 2020-09-04 北京百度网讯科技有限公司 High-precision map construction method and device, electronic equipment and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8581900B2 (en) * 2009-06-10 2013-11-12 Microsoft Corporation Computing transitions between captured driving runs


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a High-Altitude Image Collection System Based on Intelligent Vision; Zhang Fei; Li Jingfu; Computer Measurement & Control; 2015-03-25 (03); full text *

Also Published As

Publication number Publication date
CN112214624A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN110118554B (en) SLAM method, apparatus, storage medium and device based on visual inertia
CN111797187B (en) Map data updating method and device, electronic equipment and storage medium
CN111649739B (en) Positioning method and device, automatic driving vehicle, electronic equipment and storage medium
CN110246182B (en) Vision-based global map positioning method and device, storage medium and equipment
CN111723768B (en) Method, device, equipment and storage medium for vehicle re-identification
CN111612852B (en) Method and apparatus for verifying camera parameters
US20220270289A1 (en) Method and apparatus for detecting vehicle pose
KR102502651B1 (en) Method and device for generating maps
KR20210036317A (en) Mobile edge computing based visual positioning method and device
CN111553844B (en) Method and device for updating point cloud
CN112132113A (en) Vehicle re-identification method and device, training method and electronic equipment
JP2015507860A (en) Guide to image capture
CN111767853B (en) Lane line detection method and device
CN111079079B (en) Data correction method, device, electronic equipment and computer readable storage medium
US20210406548A1 (en) Method, apparatus, device and storage medium for processing image
KR20210154781A (en) Method for acquiring traffic status and apparatus thereof, roadside device, and cloud control platform
KR20210093194A (en) A method, an apparatus an electronic device, a storage device, a roadside instrument, a cloud control platform and a program product for detecting vehicle's lane changing
CN112102417B (en) Method and device for determining world coordinates
CN111523471A (en) Method, device and equipment for determining lane where vehicle is located and storage medium
CN111949816B (en) Positioning processing method, device, electronic equipment and storage medium
CN113992860B (en) Behavior recognition method and device based on cloud edge cooperation, electronic equipment and medium
CN111627241A (en) Method and device for generating vehicle queuing information
CN111652112A (en) Lane flow direction identification method and device, electronic equipment and storage medium
CN114627268A (en) Visual map updating method and device, electronic equipment and medium
CN111612851B (en) Method, apparatus, device and storage medium for calibrating camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant