CN112214624A - Method, apparatus, device and storage medium for processing image

Method, apparatus, device and storage medium for processing image

Info

Publication number
CN112214624A
CN112214624A (application CN202011092196.3A)
Authority
CN
China
Prior art keywords
image
acquisition
determining
images
points
Prior art date
Legal status
Pending
Application number
CN202011092196.3A
Other languages
Chinese (zh)
Inventor
白国财 (Bai Guocai)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011092196.3A priority Critical patent/CN112214624A/en
Publication of CN112214624A publication Critical patent/CN112214624A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55 Clustering; Classification
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using geographical or spatial information, e.g. location

Abstract

The application discloses a method, an apparatus, a device and a storage medium for processing images, and relates to the field of computer vision. The specific implementation scheme is as follows: acquiring a plurality of images acquired at a plurality of acquisition points in a preset road section; determining the acquisition direction of each image; clustering the plurality of images according to the acquisition direction to obtain at least one image cluster; determining the direction corresponding to each image cluster; and determining the jump relation among the images according to the direction corresponding to each image cluster and the positions of the acquisition points. This implementation does not need to bind the images to a road network; it realizes street view display, simplifies operation, and reduces production cost.

Description

Method, apparatus, device and storage medium for processing image
Technical Field
The present application relates to the field of computer technology, and in particular, to the field of computer vision, and more particularly, to a method, apparatus, device, and storage medium for processing an image.
Background
With the continuous development of artificial intelligence in recent years, computer vision technology has come to be applied more and more widely. For many applications, the field of view of an ordinary lens has proved too limited. Wide-angle images and panoramic images, with their very large viewing angles, can capture more scene information at once, and are therefore widely used in fields such as security monitoring, industry and medicine, and intelligent transportation.
Applying wide-angle or panoramic images to an internet street view service gives users a browsing experience consistent with actual street conditions. The traditional approach is to bind each image to a road, but collecting and producing the road network is expensive, so binding images to roads is complex to produce and costly.
Disclosure of Invention
A method, apparatus, device, and storage medium for processing an image are provided.
According to a first aspect, there is provided a method for processing an image, comprising: acquiring a plurality of images acquired at a plurality of acquisition points in a preset road section; determining the acquisition direction of each image; clustering the plurality of images according to the acquisition direction to obtain at least one image cluster; determining the direction corresponding to each image cluster; and determining the jump relation among the images according to the direction corresponding to each image cluster and the positions of the acquisition points.
According to a second aspect, there is provided an apparatus for processing an image, comprising: an image acquisition unit configured to acquire a plurality of images acquired at a plurality of acquisition points within a preset road section; a first direction determination unit configured to determine an acquisition direction of each image; the image clustering unit is configured to cluster the plurality of images according to the acquisition direction to obtain at least one image cluster; a second direction determination unit configured to determine a direction corresponding to each image cluster; and the jump determining unit is configured to determine a jump relation between the images according to the corresponding direction of each image cluster and the positions of the acquisition points.
According to a third aspect, there is provided an electronic device for processing an image, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method as described in the first aspect.
The technology of the present application solves the technical problem of existing street view display methods that images must be bound to a road network and production is complex. Images no longer need to be bound to a road network; street view display is achieved, operation is simplified, and production cost is reduced.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for processing an image according to the present application;
FIG. 3 is a schematic illustration of an application scenario of a method for processing an image according to the present application;
FIG. 4 is a flow diagram of another embodiment of a method for processing an image according to the present application;
FIG. 5 is a schematic diagram of a bifurcated intersection according to the embodiment shown in FIG. 4;
FIG. 6 is a schematic block diagram of one embodiment of an apparatus for processing images according to the present application;
fig. 7 is a block diagram of an electronic device for implementing a method for processing an image according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for processing images or the apparatus for processing images of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a camera 101, a network 102, and a terminal device 103. The network 102 is used to provide the medium of a communication link between the camera 101 and the terminal device 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The camera 101 may be any image pickup device capable of taking a wide-angle image or a panoramic image, and may transmit the captured wide-angle or panoramic image to the terminal device 103 through the network 102 so that the terminal device 103 processes it. The camera 101 may be, for example, a smartphone or a smart camera.
The terminal device 103 may be any electronic device capable of processing images. Various client applications can be installed on it, such as image processing applications and social platform applications.
The terminal device 103 may be hardware or software. When the terminal device 103 is hardware, it may be any of various electronic devices, including but not limited to a tablet computer, a car computer, a laptop computer, a desktop computer, and the like. When the terminal device 103 is software, it can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module, which is not specifically limited herein.
It should be noted that the method for processing an image provided in the embodiment of the present application is generally executed by the terminal device 103. Accordingly, the apparatus for processing an image is generally provided in the terminal device 103.
It should be understood that the number of cameras, networks and terminal devices in fig. 1 is merely illustrative. There may be any number of cameras, networks, and terminal devices, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for processing an image according to the present application is shown. The method for processing the image of the embodiment comprises the following steps:
step 201, acquiring a plurality of images acquired at a plurality of acquisition points in a preset road section.
In this embodiment, the execution subject of the method for processing an image (for example, the terminal device 103 shown in fig. 1) may acquire, over a wired or wireless connection, a plurality of images acquired at a plurality of acquisition points in a preset road section. Each image may be a wide-angle image or a panoramic image. Here, a wide-angle image may have an angle of view of 180 degrees or even up to 220 degrees, and a panoramic image may have an angle of view of 360 degrees. The preset road section may be a road section with a preset length or at a preset position. An acquisition point is a location at which an image is acquired. The positions of the acquisition points may be preset or determined according to the actual situation of the road section. The images may be acquired by a camera at the acquisition points. The camera may be mounted on a collection vehicle and continuously acquire images as the vehicle travels, or it may be held by a user and continuously acquire images along the user's walking route.
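To make the later steps concrete, the following is a minimal Python sketch of the data each acquisition point might carry. The class name and fields are illustrative assumptions, not taken from the application text.

```python
# Hypothetical container for one acquisition point; field names are
# illustrative, not from the application.
from dataclasses import dataclass

@dataclass
class AcquisitionPoint:
    image_id: str       # identifier of the wide-angle or panoramic image
    x: float            # planar position, e.g. metres east of a local origin
    y: float            # planar position, e.g. metres north of a local origin
    heading_deg: float  # acquisition direction, degrees counterclockwise from the +x axis

points = [
    AcquisitionPoint("img_a", 0.0, 0.0, 0.0),
    AcquisitionPoint("img_b", 10.0, 0.5, 2.0),
    AcquisitionPoint("img_c", 20.0, -0.3, 358.0),
]
```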
Step 202, determining the collection direction of each image.
The execution subject may also determine the acquisition direction of each image. Specifically, if the image is a panoramic image, the execution subject may take the direction in which the earliest-captured frame of the panoramic image points as the acquisition direction of the image. Alternatively, the execution subject may determine the acquisition direction of each image from the traveling direction of the vehicle on which the camera is mounted, or determine the direction at capture time from a sensor mounted in the camera (e.g., a gyroscope) and take that as the acquisition direction.
And 203, clustering the plurality of images according to the acquisition direction to obtain at least one image cluster.
After the acquisition direction of each image is determined, the execution subject can cluster the plurality of images to obtain at least one image cluster. Specifically, the execution subject may cluster the images using any of a variety of existing clustering algorithms. Each image cluster contains at least one image, and images with the same acquisition direction fall in the same image cluster.
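As one possible reading of this step, the sketch below greedily groups the AcquisitionPoint records defined earlier by heading. The 15-degree tolerance is an assumed parameter, and any off-the-shelf clustering algorithm could replace the greedy pass.

```python
def angle_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def cluster_by_direction(points, tol_deg: float = 15.0):
    """Greedy stand-in for the clustering step: a point joins the first
    cluster whose seed heading is within tol_deg, else starts a new one."""
    clusters = []
    for p in points:
        for cluster in clusters:
            if angle_diff(p.heading_deg, cluster[0].heading_deg) < tol_deg:
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```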
And step 204, determining the corresponding direction of each image cluster.
The execution subject may also determine the direction corresponding to each image cluster. Specifically, for each image cluster, the execution subject may average the directions of the images it contains and use the result as the direction corresponding to the cluster. Alternatively, the execution subject may take the direction of any one image in the cluster as the direction corresponding to the cluster.
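A naive arithmetic average of headings breaks near the 0/360-degree wrap-around, so a sketch of the averaging option might use a circular mean; this refinement is ours, not stated in the application.

```python
import math

def cluster_heading(cluster) -> float:
    """Circular mean of the member headings, in degrees in [0, 360).
    Headings are averaged as unit vectors to handle wrap-around."""
    sx = sum(math.cos(math.radians(p.heading_deg)) for p in cluster)
    sy = sum(math.sin(math.radians(p.heading_deg)) for p in cluster)
    return math.degrees(math.atan2(sy, sx)) % 360.0
```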
And step 205, determining the jump relation among the images according to the corresponding direction of each image cluster and the position of each acquisition point.
After determining the direction corresponding to each image cluster, the execution subject can determine the jump relation between the images by combining the positions of the acquisition points. Specifically, the execution subject may first determine the jump relation between images in the same image cluster, and then determine the jump relation between images in different image clusters. For example, the execution subject may determine that a jump relation exists between the images corresponding to the two closest acquisition points with the same acquisition direction in the same image cluster. Then, for each acquisition point in a cluster, it examines the distances to acquisition points whose images have different acquisition directions, and selects the closest such acquisition points; these may lie in different image clusters. Finally, it establishes the jump relation between the images corresponding to these acquisition points. A jump relation here means a jump from a previous panoramic image to a subsequent one, where the previous image depicts the acquisition point of the subsequent image. For example, if the previous image contains a location A and the subsequent image was acquired at location A, clicking location A in the previous image jumps to the subsequent image.
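A minimal sketch of the within-cluster part of this step, reusing the AcquisitionPoint fields from the earlier sketch; linking across clusters would follow the same nearest-distance idea.

```python
import math

def within_cluster_jumps(cluster):
    """Link each acquisition point to its nearest neighbour in the same
    cluster; returns directed (from_image, to_image) pairs."""
    jumps = set()
    for p in cluster:
        others = [q for q in cluster if q is not p]
        if not others:
            continue
        nearest = min(others, key=lambda q: math.hypot(q.x - p.x, q.y - p.y))
        jumps.add((p.image_id, nearest.image_id))
    return jumps
```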
Referring to fig. 3, a schematic diagram of one application scenario of the method for processing an image according to the present application is shown. In the scenario of fig. 3, the terminal device acquires a plurality of images acquired at different acquisition points A, B, C, D of the road section 301. By analyzing these images, the jump relation between them is determined to be acquisition point A → acquisition point B → acquisition point C → acquisition point D.
According to the method for processing images provided by the above embodiment of the application, the acquisition directions of the images are clustered to obtain the direction corresponding to each image cluster, and the jump relation between the images is determined by combining the positions of the acquisition points. The images therefore do not need to be bound to a road network, which reduces the cost of street view display.
With continued reference to FIG. 4, a flow 400 of another embodiment of a method for processing an image according to the present application is shown. In this embodiment, the image may be a panoramic image. The above method may comprise the steps of:
step 401, acquiring a plurality of images acquired at a plurality of acquisition points in a preset road section.
Step 402, determining the acquisition direction of each image.
And 403, clustering the plurality of images according to the acquisition direction to obtain at least one image cluster.
Step 404, determining a corresponding direction of each image cluster.
The principle of steps 401 to 404 is similar to that of steps 201 to 204, and is not described herein again.
Step 405, the position of each acquisition point is corrected.
During image acquisition, the positioning device may exhibit deviation or drift in the positions it reports for the acquisition points. The positioning device may be installed in the camera or in the collection vehicle, and may be any device used for positioning, such as a GPS chip. In this embodiment, the execution subject may therefore correct the position of each acquisition point. Specifically, from the positions of the acquisition points whose images share an acquisition direction, the execution subject can determine a line along that direction, and correct any acquisition point whose distance to the line is greater than a preset value, for example by moving it onto the line.
Step 406, in response to determining that the angle between the directions corresponding to the at least two image clusters is greater than a preset threshold, determining a center point of each acquisition point.
In this embodiment, after obtaining a plurality of image clusters, the execution subject may determine the angle between the directions corresponding to the image clusters. If the angle is larger than a preset threshold, the preset road section includes a bifurcation intersection (as shown in fig. 5, where the angle between the directions corresponding to image clusters G1 and G2 is greater than the preset threshold). The preset threshold may be, for example, 30 degrees, which this embodiment does not limit. In this case, the execution subject may first determine the center point of the acquisition points (e.g., point E in fig. 5). The center point may be determined from the centroids of the image clusters: the execution subject first determines the centroid of each image cluster from the positions of the acquisition points of its images, and then computes the center of these centroids to obtain the center point of the acquisition points.
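A sketch of the center point computation under the centroid reading above, using the planar coordinates from the AcquisitionPoint sketch.

```python
def centroid(cluster):
    """Mean position of one image cluster's acquisition points."""
    n = len(cluster)
    return (sum(p.x for p in cluster) / n, sum(p.y for p in cluster) / n)

def center_point(clusters):
    """Center of all acquisition points, taken as the mean of the
    per-cluster centroids."""
    cs = [centroid(c) for c in clusters]
    return (sum(x for x, _ in cs) / len(cs), sum(y for _, y in cs) / len(cs))
```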
Step 407, determining a directional connecting line between the collection points according to the position of the central point, the positions of the collection points and the collection directions of the images corresponding to the collection points.
After determining the center point, the execution subject can determine the directional connecting line between the acquisition points according to the position of the center point, the positions of the acquisition points, and the acquisition directions of the corresponding images. Specifically, for a single image cluster, the execution subject may connect its acquisition points sequentially in descending order of their distance to the center point, obtaining the directional connecting line corresponding to that cluster. The directional connecting lines may also be regarded as the topology of the preset road section.
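The ordering rule just described might look like the following sketch; ties between equal distances are ignored here.

```python
import math

def directed_line(cluster, center):
    """Sort one cluster's acquisition points by descending distance to the
    center point; consecutive pairs form the directed connecting line."""
    cx, cy = center
    ordered = sorted(cluster,
                     key=lambda p: math.hypot(p.x - cx, p.y - cy),
                     reverse=True)
    return [(a.image_id, b.image_id) for a, b in zip(ordered, ordered[1:])]
```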
And step 408, determining the jump relation among the images according to the directional connecting lines.
Finally, the execution subject may determine the jump relation between the images according to the directional connecting lines. Specifically, the execution subject may determine the order of the images from the direction of each directional connecting line, and then set the jump relation as a jump from each previous image to the next.
It should be noted that the directional connecting lines may be bidirectional. For example, in fig. 3, the directional connecting line may be A → B → C → D or D → C → B → A, so the image corresponding to acquisition point A can jump to the image corresponding to acquisition point B, and vice versa. In some applications, each image may have a jump relation with multiple images; that is, the image corresponding to acquisition point A may jump directly to the image corresponding to acquisition point C, or directly to the image corresponding to acquisition point D.
Step 406', in response to determining that the angle between the directions corresponding to the at least two image clusters is less than or equal to a preset threshold, directionally connecting the acquisition points according to the positions of the acquisition points.
In this embodiment, if the execution subject determines that the angle between the directions corresponding to the image clusters is less than or equal to the preset threshold, the preset road section is considered not to include a bifurcation intersection, i.e., it is a road travelled in a single direction. The acquisition points can then be directionally connected according to their positions.
And 407', determining the jump relation among the images according to the directed connecting lines among the acquisition points.
The principle of step 407' is similar to that of step 408, and is not described here again.
In some optional implementations of this embodiment, step 402 may be specifically implemented by the following step not shown in fig. 4: determining the acquisition direction of the image according to the driving direction of the collection vehicle when the image was acquired.
In this implementation, the execution subject may determine the acquisition direction of the image from the traveling direction of the collection vehicle at the moment the image was acquired. Specifically, the execution subject may use the direction of the lane in which the vehicle was located when the image was captured as the acquisition direction of the image.
In some optional implementations of the present embodiment, the image is a panoramic image. The step 402 can be implemented by the following steps not shown in fig. 4: and determining the acquisition direction of each panoramic image according to the direction of the first frame image in each panoramic image.
In this implementation, for each panoramic image, the execution subject may determine the acquisition direction of the panoramic image according to the direction of its first frame image. Here, the first frame image is the image captured earliest by the panoramic camera. The execution subject may determine the acquisition direction of the panoramic image from the direction indicated by the content of the first frame image.
In some optional implementations of this embodiment, the step 404 may be specifically implemented by the following steps not shown in fig. 4: and determining the direction corresponding to each image cluster according to the direction of each image in the image clusters.
In this implementation, the execution subject may determine the direction corresponding to each image cluster according to the directions of the images in the cluster. Specifically, the average of the directions of the images in a cluster may be used as the direction corresponding to that cluster.
In some optional implementations of this embodiment, the execution subject may implement step 405 by the following steps not shown in fig. 4: performing straight-line fitting on the acquisition points corresponding to the images in each image cluster; and correcting the position of each acquisition point according to the position of its projection onto the fitted line.
In this implementation, the execution subject may first perform straight-line fitting on the acquisition points corresponding to the images in each image cluster, and then correct the position of each acquisition point according to the position of its projection onto the fitted line. For example, the projection point may be taken as the corrected position of the acquisition point, or the midpoint between the projection point and the original acquisition point may be taken as the corrected position.
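A sketch of the fit-and-project correction. The application does not specify the fitting method; a total-least-squares line via SVD is one common choice, used here.

```python
import numpy as np

def correct_positions(cluster):
    """Fit a total-least-squares line to a cluster's acquisition points
    and move each point onto its projection on that line (in place)."""
    if len(cluster) < 2:
        return  # a single point defines no line
    pts = np.array([[p.x, p.y] for p in cluster], dtype=float)
    mean = pts.mean(axis=0)
    # The first right-singular vector of the centred points is the
    # direction of the best-fit line.
    _, _, vt = np.linalg.svd(pts - mean)
    direction = vt[0]
    for p, row in zip(cluster, pts):
        t = float(np.dot(row - mean, direction))  # signed offset along the line
        p.x, p.y = (mean + t * direction).tolist()
```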
In some optional implementations of the present embodiment, the directional connecting lines may include a first directional connecting line and a second directional connecting line. The execution subject may determine the directional connecting lines between the acquisition points by the following steps not shown in fig. 4: according to the position of the center point and the positions of the acquisition points, determining, from the acquisition points corresponding to the images in each image cluster, the acquisition point related to the center point as a relevant point; determining the first directional connecting line between the acquisition points corresponding to the images in each image cluster according to the acquisition direction of each image in the cluster and the positions of the corresponding acquisition points; and determining the second directional connecting line between the relevant points according to the acquisition direction and position of the images corresponding to the relevant points.
In this implementation, the execution subject may determine, from the acquisition points corresponding to the images in each image cluster, the acquisition point related to the center point as the relevant point, according to the position of the center point and the positions of the acquisition points. Here, the relevant point may be the acquisition point closest to the center point in each image cluster. Then, for each image cluster, the execution subject may determine the first directional connecting line between the cluster's acquisition points according to the acquisition direction of each image and the positions of the corresponding acquisition points; specifically, it may connect the acquisition points sequentially in descending order of their distance to the center point. The execution subject may also determine the second directional connecting line between the relevant points according to the acquisition direction and position of the images corresponding to the relevant points. In particular, it may directly connect any two relevant points, i.e., the connecting line between any two relevant points is bidirectional. Alternatively, it may determine the second directional connecting line from the acquisition direction of the image corresponding to the relevant point, subject to the condition that the angle between the second directional connecting line and the direction corresponding to the relevant point is smaller than a preset value.
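A sketch of the relevant-point selection and the angle-constrained second directional connecting line, reusing the AcquisitionPoint fields; the 45-degree bound is an invented stand-in for the preset value.

```python
import math

def _angle_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def relevant_points(clusters, center):
    """Per image cluster, the acquisition point closest to the center point."""
    cx, cy = center
    return [min(c, key=lambda p: math.hypot(p.x - cx, p.y - cy)) for c in clusters]

def second_directed_lines(rel_points, max_angle_deg: float = 45.0):
    """Connect relevant point a -> b when the bearing from a to b roughly
    matches a's own acquisition direction."""
    lines = []
    for a in rel_points:
        for b in rel_points:
            if a is b:
                continue
            bearing = math.degrees(math.atan2(b.y - a.y, b.x - a.x)) % 360.0
            if _angle_diff(bearing, a.heading_deg) < max_angle_deg:
                lines.append((a.image_id, b.image_id))
    return lines
```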
In some optional implementations of this embodiment, when determining the directional connecting line between the acquisition points corresponding to the images in an image cluster, the line may be determined by the following steps not shown in fig. 4: determining the distance between the acquisition points according to the positions of the acquisition points corresponding to the images in each image cluster; and in response to determining that a distance is smaller than a preset threshold, directionally connecting the two acquisition points corresponding to that distance.
In this implementation, when determining the directional connecting lines, the execution subject may first calculate the distance between the acquisition points corresponding to the images in a single image cluster. If a distance is smaller than the preset threshold, the two acquisition points corresponding to that distance are directionally connected. If it is greater than or equal to the preset threshold, the two acquisition points are considered unrelated: they are not connected and there is no jump relation between their images.
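A sketch of this distance gate; the 25-metre threshold is an invented value, and both directions are recorded since the connecting lines may be bidirectional.

```python
import math

def gated_connections(cluster, max_dist: float = 25.0):
    """Directionally connect, both ways, any pair of acquisition points
    closer than max_dist; farther pairs get no connection and no jump."""
    pairs = []
    for i, a in enumerate(cluster):
        for b in cluster[i + 1:]:
            if math.hypot(b.x - a.x, b.y - a.y) < max_dist:
                pairs.append((a.image_id, b.image_id))
                pairs.append((b.image_id, a.image_id))
    return pairs
```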
The method for processing images provided by this embodiment of the application handles the intersection and non-intersection cases separately, making the resulting jump relations between images more accurate; meanwhile, the positions of the acquisition points can be corrected, improving the accuracy of the image position information.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for processing an image, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 6, the apparatus 600 for processing an image of the present embodiment includes: an image acquisition unit 601, a first direction determination unit 602, an image clustering unit 603, a second direction determination unit 604, and a jump determination unit 605.
The image acquisition unit 601 is configured to acquire a plurality of images acquired at a plurality of acquisition points within a preset road segment.
A first direction determination unit 602 configured to determine an acquisition direction of each image.
An image clustering unit 603 configured to cluster the plurality of images according to the collecting direction, so as to obtain at least one image cluster.
A second direction determining unit 604 configured to determine a direction corresponding to each image cluster.
A jump determining unit 605 configured to determine a jump relationship between the images according to the direction corresponding to each image cluster and the position of each acquisition point.
In some optional implementations of this embodiment, the jump determination unit 605 may be further configured to: determining a central point of each acquisition point in response to determining that an angle between directions corresponding to at least two image clusters is greater than a preset threshold; determining a directional connecting line between the acquisition points according to the position of the central point, the positions of the acquisition points and the acquisition directions of the images corresponding to the acquisition points; and determining the jump relation among the images according to the directional connecting line.
In some optional implementations of the present embodiment, the directional connection lines include a first directional connection line and a second directional connection line. The jump determination unit 605 may be further configured to: according to the position of the central point and the positions of the acquisition points, determining the acquisition points related to the central point from the acquisition points corresponding to the images in each image cluster as related points; determining a first directional connecting line between the acquisition points corresponding to the images in each image cluster according to the acquisition direction of each image in each image cluster and the positions of the corresponding acquisition points; and determining a second directional connecting line between the relevant points according to the acquisition direction and the position of the image corresponding to the relevant points.
In some optional implementations of this embodiment, the jump determination unit 605 may be further configured to: determining the distance between the acquisition points according to the positions of the acquisition points corresponding to the images in each image cluster; and responding to the fact that the distance is smaller than the preset threshold value, and directionally connecting two acquisition points corresponding to the distance.
In some optional implementations of this embodiment, the jump determination unit 605 may be further configured to: in response to the fact that the angle between the directions corresponding to at least two image clusters is smaller than or equal to a preset threshold value, directionally connecting each acquisition point according to the position of each acquisition point; and determining the jump relation among the images according to the directional connecting lines among the acquisition points.
In some optional implementations of this embodiment, the apparatus 600 may further include a position rectification unit, not shown in fig. 6, configured to rectify the position of each acquisition point.
In some optional implementations of this embodiment, the position correction unit may be further configured to: performing straight line fitting on acquisition points corresponding to the images in each image cluster; and correcting the position of each acquisition point according to the position of the projection point of each acquisition point to the fitting straight line.
In some optional implementations of this embodiment, the first direction determining unit 602 may be further configured to: and determining the acquisition direction of the image according to the driving direction of the acquired vehicle when the image is acquired.
In some optional implementations of this embodiment, the second direction determining unit 604 may be further configured to: and determining the direction corresponding to each image cluster according to the direction of each image in the image clusters.
In some optional implementations of the present embodiment, the image comprises a panoramic image.
In some optional implementations of this embodiment, the first direction determining unit 602 may be further configured to: and determining the acquisition direction of each panoramic image according to the direction of the first frame image in each panoramic image.
It should be understood that units 601 to 605 recited in the apparatus 600 for processing an image correspond to respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above for the method for processing an image are equally applicable to the apparatus 600 and the units included therein and will not be described in detail here.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device for performing the method for processing an image according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only, and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 7, the electronic device includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, each providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the methods provided herein for processing images. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the methods provided herein for processing images.
The memory 702, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for processing an image in the embodiments of the present application (e.g., the image acquisition unit 601, the first direction determination unit 602, the image clustering unit 603, the second direction determination unit 604, and the jump determination unit 605 shown in fig. 6). By running the non-transitory software programs, instructions, and modules stored in the memory 702, the processor 701 executes the various functional applications and data processing of the device, i.e., implements the method for processing an image of the above method embodiments.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and application programs required for at least one function, and the storage data area may store data created according to the use of the electronic device for processing images, and the like. Further, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701; such remote memory may be connected via a network to the electronic device that performs the method for processing an image. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device performing the method for processing an image may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for processing images, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, or a joystick. The output device 704 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the collection directions of the images are clustered to obtain the direction corresponding to each image cluster, and the skip relation between the images is determined by combining the positions of the collection points, so that the images do not need to be bound with a road network, and the cost of street view display is reduced.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (24)

1. A method for processing an image, comprising:
acquiring a plurality of images acquired at a plurality of acquisition points in a preset road section;
determining the acquisition direction of each image;
clustering the plurality of images according to the acquisition direction to obtain at least one image cluster;
determining the corresponding direction of each image cluster;
and determining the jump relation among the images according to the corresponding direction of each image cluster and the position of each acquisition point.
2. The method according to claim 1, wherein the determining a jump relationship between the images according to the corresponding direction of each image cluster and the position of each acquisition point comprises:
determining a central point of each acquisition point in response to determining that an angle between directions corresponding to at least two image clusters is greater than a preset threshold;
determining a directional connecting line between the acquisition points according to the position of the central point, the positions of the acquisition points and the acquisition directions of the images corresponding to the acquisition points;
and determining the jump relation among the images according to the directional connecting line.
3. The method of claim 2, wherein the directional connection lines include a first directional connection line and a second directional connection line; and
the determining of the directional connecting line between the acquisition points according to the position of the central point, the positions of the acquisition points and the acquisition directions of the images corresponding to the acquisition points comprises the following steps:
according to the position of the central point and the positions of the acquisition points, determining the acquisition points related to the central point as related points from the acquisition points corresponding to the images in each image cluster;
determining a first directional connecting line between the acquisition points corresponding to the images in each image cluster according to the acquisition direction of each image in each image cluster and the positions of the corresponding acquisition points;
and determining a second directional connecting line between the relevant points according to the acquisition direction and the position of the image corresponding to the relevant points.
4. The method according to claim 3, wherein the determining the directional connecting lines between the corresponding acquisition points of the images in each image cluster according to the acquisition directions of the images in the image cluster and the positions of the corresponding acquisition points comprises:
determining the distance between the acquisition points according to the positions of the acquisition points corresponding to the images in each image cluster;
and responding to the fact that the distance is smaller than a preset threshold value, and directionally connecting two acquisition points corresponding to the distance.
5. The method according to claim 2, wherein the determining a jump relationship between the images according to the corresponding direction of each image cluster and the position of each acquisition point comprises:
in response to the fact that the angle between the directions corresponding to at least two image clusters is smaller than or equal to the preset threshold value, directionally connecting each acquisition point according to the position of each acquisition point;
and determining the jump relation among the images according to the directional connecting lines among the acquisition points.
6. The method of claim 1, wherein the method further comprises:
the position of each acquisition point is corrected.
7. The method of claim 6, wherein said rectifying the position of each acquisition point comprises:
performing straight line fitting on acquisition points corresponding to the images in each image cluster;
and correcting the position of each acquisition point according to the position of the projection point of each acquisition point to the fitting straight line.
8. The method of claim 1, wherein the determining an acquisition direction of each image comprises:
and determining the acquisition direction of the image according to the driving direction of the collection vehicle when the image is acquired.
9. The method of claim 1, wherein the determining the direction to which each image cluster corresponds comprises:
and determining the direction corresponding to each image cluster according to the direction of each image in the image clusters.
10. The method of any of claims 1-9, wherein the image comprises a panoramic image.
11. The method of claim 1, wherein the determining an acquisition direction of each image comprises:
and determining the acquisition direction of each panoramic image according to the direction of the first frame image in each panoramic image.
12. An apparatus for processing an image, comprising:
an image acquisition unit configured to acquire a plurality of images acquired at a plurality of acquisition points within a preset road section;
a first direction determination unit configured to determine an acquisition direction of each image;
the image clustering unit is configured to cluster the plurality of images according to the acquisition direction to obtain at least one image cluster;
a second direction determination unit configured to determine a direction corresponding to each image cluster;
and the jump determining unit is configured to determine a jump relation between the images according to the corresponding direction of each image cluster and the positions of the acquisition points.
13. The apparatus of claim 12, wherein,
the jump determination unit is further configured to:
determining a central point of each acquisition point in response to determining that an angle between directions corresponding to at least two image clusters is greater than a preset threshold;
determining a directional connecting line between the acquisition points according to the position of the central point, the positions of the acquisition points and the acquisition directions of the images corresponding to the acquisition points;
and determining the jump relation among the images according to the directional connecting line.
14. The apparatus of claim 13, wherein the directional connection lines comprise a first directional connection line and a second directional connection line; and
the jump determination unit is further configured to:
according to the position of the central point and the positions of the acquisition points, determining the acquisition points related to the central point as related points from the acquisition points corresponding to the images in each image cluster;
determining a first directional connecting line between the acquisition points corresponding to the images in each image cluster according to the acquisition direction of each image in each image cluster and the positions of the corresponding acquisition points;
and determining a second directional connecting line between the relevant points according to the acquisition direction and the position of the image corresponding to the relevant points.
15. The apparatus of claim 14, wherein the hop determination unit is further configured to:
determining the distance between the acquisition points according to the positions of the acquisition points corresponding to the images in each image cluster;
and responding to the fact that the distance is smaller than a preset threshold value, and directionally connecting two acquisition points corresponding to the distance.
16. The apparatus of claim 13, wherein the hop determination unit is further configured to:
in response to the fact that the angle between the directions corresponding to at least two image clusters is smaller than or equal to the preset threshold value, directionally connecting each acquisition point according to the position of each acquisition point;
and determining the jump relation among the images according to the directional connecting lines among the acquisition points.
17. The apparatus of claim 12, wherein the apparatus further comprises:
a position correction unit configured to correct a position of each acquisition point.
18. The apparatus of claim 17, wherein the position correction unit is further configured to:
performing straight line fitting on acquisition points corresponding to the images in each image cluster;
and correcting the position of each acquisition point according to the position of the projection point of each acquisition point to the fitting straight line.
19. The apparatus of claim 12, wherein the first direction determining unit is further configured to:
and determining the acquisition direction of the image according to the driving direction of the collection vehicle when the image is acquired.
20. The apparatus of claim 12, wherein the second direction determining unit is further configured to:
and determining the direction corresponding to each image cluster according to the direction of each image in the image clusters.
21. The apparatus of any of claims 12-20, wherein the image comprises a panoramic image.
22. The apparatus of claim 21, wherein the first direction determining unit is further configured to:
and determining the acquisition direction of each panoramic image according to the direction of the first frame image in each panoramic image.
23. An electronic device for processing an image, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-11.
CN202011092196.3A 2020-10-13 2020-10-13 Method, apparatus, device and storage medium for processing image Pending CN112214624A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011092196.3A CN112214624A (en) 2020-10-13 2020-10-13 Method, apparatus, device and storage medium for processing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011092196.3A CN112214624A (en) 2020-10-13 2020-10-13 Method, apparatus, device and storage medium for processing image

Publications (1)

Publication Number Publication Date
CN112214624A (en) 2021-01-12

Family

ID=74053982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011092196.3A Pending CN112214624A (en) 2020-10-13 2020-10-13 Method, apparatus, device and storage medium for processing image

Country Status (1)

Country Link
CN (1) CN112214624A (en)

Similar Documents

Publication Publication Date Title
CN110118554B (en) SLAM method, apparatus, storage medium and device based on visual inertia
JP2015507860A (en) Guide to image capture
CN110991320A (en) Road condition detection method and device, electronic equipment and storage medium
CN111694973A (en) Model training method and device for automatic driving scene and electronic equipment
CN104182051A (en) Headset intelligent device and interactive system with same
CN111738072A (en) Training method and device of target detection model and electronic equipment
CN111553844A (en) Method and device for updating point cloud
CN111652112A (en) Lane flow direction identification method and device, electronic equipment and storage medium
CN111723768A (en) Method, device, equipment and storage medium for vehicle weight recognition
CN110806215B (en) Vehicle positioning method, device, equipment and storage medium
CN110929639A (en) Method, apparatus, device and medium for determining position of obstacle in image
CN110968718A (en) Target detection model negative sample mining method and device and electronic equipment
US20190266885A1 (en) Control service for controlling devices with body-action input devices
CN112214624A (en) Method, apparatus, device and storage medium for processing image
CN112132113A (en) Vehicle re-identification method and device, training method and electronic equipment
CN112116826A (en) Method and device for generating information
CN112214625A (en) Method, apparatus, device and storage medium for processing image
CN111612851A (en) Method, apparatus, device and storage medium for calibrating camera
CN111721305A (en) Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
CN110595490B (en) Preprocessing method, device, equipment and medium for lane line perception data
CN111028272A (en) Object tracking method and device
CN110675635B (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN110798681B (en) Monitoring method and device of imaging equipment and computer equipment
CN112747758A (en) Road network modeling method and device
CN112489460A (en) Signal lamp information output method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination