CN108132054A - Method and apparatus for generating information - Google Patents
Method and apparatus for generating information
- Publication number
- CN108132054A CN108132054A CN201711386919.9A CN201711386919A CN108132054A CN 108132054 A CN108132054 A CN 108132054A CN 201711386919 A CN201711386919 A CN 201711386919A CN 108132054 A CN108132054 A CN 108132054A
- Authority
- CN
- China
- Prior art keywords
- image
- information
- detected
- indicator element
- navigation indicator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Landscapes
- Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Automation & Control Theory (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application disclose a method and apparatus for generating information. One specific embodiment of the method includes: obtaining an image to be detected in which a navigation indicator element is displayed; performing image analysis on the image to be detected to generate image feature information of the image to be detected; extracting, based on the image feature information, at least one candidate region image from the image to be detected; and inputting each candidate region image of the at least one candidate region image into a pre-trained element recognition model to obtain location information and category information of the navigation indicator element in the image to be detected. Generation of information related to the navigation indicator element is thereby achieved.
Description
Technical field
This application relates to the field of computer technology, specifically to the field of image recognition, and more particularly to a method and apparatus for generating information.
Background technology
As the Internet has continued to develop, network-based electronic maps have gradually replaced paper maps. Electronic maps solve the problems of carrying and preserving paper maps, support map zooming, and are updated quickly, making searching far more convenient for users. They also continue to be refined: today an electronic map can help a user look up places, calculate routes, and query traffic information and weather. Navigation maps are among the more user-friendly electronic map products, and acquiring the information of points of interest is an important step in building a navigation map.
At present, the information of points of interest in a navigation map is mainly acquired in two ways. In the first, acquired images are stitched into local three-dimensional panoramic images, and target points of interest in the scene are marked manually, street by street, to obtain the information of the target points of interest. In the second, key elements in an image are identified automatically: features such as texture and structure are extracted from the image, and statistical-learning detection and recognition methods are then used to obtain the information of the target points of interest.
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating information.
In a first aspect, an embodiment of the present application proposes a method for generating information, the method including: obtaining an image to be detected in which a navigation indicator element is displayed; performing image analysis on the image to be detected to generate image feature information of the image to be detected; extracting, based on the image feature information, at least one candidate region image from the image to be detected, wherein the probability that the navigation indicator element is present in a candidate region image is greater than a preset threshold; and inputting each candidate region image of the at least one candidate region image into a pre-trained element recognition model to obtain location information and category information of the navigation indicator element in the image to be detected, wherein the element recognition model is used to characterize the correspondence between an image and the location information and category information of the navigation indicator element in the image.
In some embodiments, obtaining the image to be detected in which the navigation indicator element is displayed includes: obtaining a target image in which the navigation indicator element is displayed; and performing super-resolution processing on the target image to obtain a super-resolution image as the image to be detected.
In some embodiments, the element recognition model is trained by the following steps: obtaining a set of sample images in which the navigation indicator element is displayed and an annotation of each sample image, wherein the annotation of a sample image includes location information and category information of the navigation indicator element in the sample image; and, using a deep learning method, training the element recognition model with each sample image in the set of sample images as input and the annotation corresponding to each sample image in the set of sample images as output.
In some embodiments, after obtaining the location information and category information of the navigation indicator element in the image to be detected, the method further includes: determining the geographic coordinates of the navigation indicator element based on the geographic coordinates of the image acquisition device that acquired the image to be detected and the location information of the navigation indicator element in the image to be detected.
In some embodiments, the image feature information includes: frequency-domain information, color information, and texture information.
In a second aspect, an embodiment of the present application proposes an apparatus for generating information, the apparatus including: a first acquiring unit configured to obtain an image to be detected in which a navigation indicator element is displayed; an analysis unit configured to perform image analysis on the image to be detected to generate image feature information of the image to be detected; an extraction unit configured to extract, based on the image feature information, at least one candidate region image from the image to be detected, wherein the probability that the navigation indicator element is present in a candidate region image is greater than a preset threshold; and a recognition unit configured to input each candidate region image of the at least one candidate region image into a pre-trained element recognition model to obtain location information and category information of the navigation indicator element in the image to be detected, wherein the element recognition model is used to characterize the correspondence between an image and the location information and category information of the navigation indicator element in the image.
In some embodiments, the first acquiring unit is further configured to: obtain a target image in which the navigation indicator element is displayed; and perform super-resolution processing on the target image to obtain a super-resolution image as the image to be detected.
In some embodiments, the apparatus further includes: a second acquiring unit configured to obtain a set of sample images in which the navigation indicator element is displayed and an annotation of each sample image, wherein the annotation of a sample image includes location information and category information of the navigation indicator element in the sample image; and a training unit configured to train the element recognition model using a deep learning method, with each sample image in the set of sample images as input and the annotation corresponding to each sample image in the set of sample images as output.
In some embodiments, the apparatus further includes: a determination unit configured to determine the geographic coordinates of the navigation indicator element based on the geographic coordinates of the image acquisition device that acquired the image to be detected and the location information of the navigation indicator element in the image to be detected.
In some embodiments, the image feature information includes: frequency-domain information, color information, and texture information.
In a third aspect, an embodiment of the present application provides a server including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus for generating information provided by the embodiments of the present application perform image analysis on the image to be detected to generate its image feature information, extract at least one candidate region image from the image to be detected based on that image feature information, and input each candidate region image of the at least one candidate region image into a pre-trained element recognition model to obtain the location information and category information of the navigation indicator element in the image to be detected. Generation of information related to the navigation indicator element is thereby achieved.
Description of the drawings
Other features, objects, and advantages of the application will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of a method for generating information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present application;
Fig. 4 is a flowchart of another embodiment of a method for generating information according to the present application;
Fig. 5 is a structural diagram of one embodiment of an apparatus for generating information according to the present application;
Fig. 6 is a structural diagram of a computer system suitable for implementing the server of the embodiments of the present application.
Specific embodiment
The application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and do not limit it. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the application and the features in the embodiments may be combined with each other. The application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which the method for generating information or the apparatus for generating information of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send data. Various client applications may be installed on the terminal devices 101, 102, 103, such as photography and video applications, image processing applications, and search applications.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers.
The server 105 may be a server providing various services, such as an image processing server that processes images uploaded by the terminal devices 101, 102, 103. The image processing server may perform processing such as detection and recognition on a received image to be detected, and generate a processing result (for example, the location information and category information of a navigation indicator element).
It should be noted that the method for generating information provided by the embodiments of the present application is generally performed by the server 105; correspondingly, the apparatus for generating information is generally disposed in the server 105.
It should be pointed out that the server 105 may also store images to be detected locally and process them directly, in which case the terminal devices 101, 102, 103 and the network 104 may be absent from the exemplary system architecture 100. An image processing application may also be installed on the terminal devices 101, 102, 103, so that images to be detected can be processed on the basis of that application. In that case, the method for generating information may also be performed by the terminal devices 101, 102, 103, and correspondingly the apparatus for generating information may be disposed in the terminal devices 101, 102, 103; the server 105 and the network 104 may then be absent from the exemplary system architecture 100.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers as required by the implementation.
With continued reference to Fig. 2, a flow 200 of one embodiment of a method for generating information according to the present application is shown. The method for generating information includes the following steps:
Step 201: obtaining an image to be detected in which a navigation indicator element is displayed.
In the present embodiment, the electronic device on which the method for generating information runs (for example, the server 105 shown in Fig. 1) may directly retrieve an image to be detected stored locally, or may receive an image to be detected sent by another electronic device (for example, the terminal devices 101, 102, 103 shown in Fig. 1) through a wired or wireless connection. The image to be detected may be an image, acquired by any of various image acquisition devices (such as a single-lens reflex camera, an industrial camera, or a smartphone), in which a navigation indicator element is displayed. A navigation indicator element may refer to an element that plays an indicative role in the navigation process; as an example, navigation indicator elements may include, but are not limited to, cameras of electronic eye systems, traffic signs, and traffic lights. The wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other wireless connection types now known or developed in the future.
Step 202: performing image analysis on the image to be detected to generate image feature information of the image to be detected.
In the present embodiment, the electronic device may perform image analysis on the image to be detected so as to generate the image feature information of the image to be detected. The image feature information may be information characterizing various features of the image; as an example, the electronic device may use a boundary feature method to obtain information characterizing the shape features of the image.
In some optional implementations of the present embodiment, the image feature information may include at least one of the following: frequency-domain information, color information, and texture information. Frequency-domain information may be information, obtained by applying a Fourier transform to the image, characterizing how sharply the gray level changes across the image. Color information may be information describing the surface properties of the scene corresponding to the image or an image region, and may be characterized by a color histogram, a color set, color moments, a color coherence vector, or a color correlogram. Texture information may be information describing the spatial distribution of color and light intensity in the image or an image region. It should be noted that the image feature information is not limited to the items listed above and may also include other information.
Step 203: extracting, based on the image feature information, at least one candidate region image from the image to be detected.
In the present embodiment, the electronic device may extract at least one candidate region image from the image to be detected based on its image feature information, where the probability that the navigation indicator element is present in a candidate region image is greater than a preset threshold. First, the electronic device may divide the image to be detected into multiple regions according to its image feature information (for example, a large area of sky is a region of slowly varying gray level in the image, corresponding to very low frequency values, and may be divided into one region), and predict for each region the probability that a navigation indicator element is present, obtaining a predicted probability for each region. Then, the electronic device may filter out the regions whose predicted probability is less than the preset threshold (for example, filtering out the sky area in the image) and extract at least one region whose predicted probability is not less than the preset threshold. Finally, the electronic device may determine the extracted region or regions as candidate regions and generate at least one candidate region image.
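The filtering step above reduces to a threshold over per-region predicted probabilities. In the sketch below, the image is assumed to be pre-divided into named regions with predicted probabilities already attached; the region names, probabilities, and threshold are illustrative assumptions.

```python
def extract_candidate_regions(regions, threshold=0.5):
    """Keep only regions whose predicted probability is not below the threshold."""
    return [name for name, prob in regions if prob >= threshold]

regions = [
    ("sky", 0.02),           # large, low-variation area: filtered out
    ("roadside_pole", 0.81),
    ("gantry_top", 0.64),
    ("road_surface", 0.10),
]
candidates = extract_candidate_regions(regions, threshold=0.5)
# candidates == ["roadside_pole", "gantry_top"]
```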
Step 204: inputting each candidate region image of the at least one candidate region image into a pre-trained element recognition model to obtain location information and category information of the navigation indicator element in the image to be detected.
In the present embodiment, the electronic device may input each candidate region image of the at least one candidate region image into a pre-trained element recognition model to obtain the location information and category information of the navigation indicator element in the image to be detected. The location information may be the image coordinates of the navigation indicator element, where image coordinates may refer to coordinates in an image coordinate system whose origin is the center of the image plane and whose X and Y axes are respectively parallel to two perpendicular edges of the image plane. The category information may be information describing the category to which the navigation indicator element belongs. As an example, depending on the navigation indicator element, it may belong to one of the following categories: traffic light, traffic sign, or electronic eye system. Optionally, the navigation indicator element may be a camera of an electronic eye system, and the category information may be information describing the category of electronic eye system the camera belongs to (for example, public-security checkpoint, traffic monitoring, or speeding capture).
It should be noted that the element recognition model is used to characterize the correspondence between an image and the location information and category information of the navigation indicator element in the image. As an example, the element recognition model may be a mapping table, generated by compiling statistics on a large number of images in which navigation indicator elements are displayed together with the location information and category information of those elements, that stores the correspondence between multiple images displaying navigation indicator elements and the location information and category information of the navigation indicator elements in those images.
In some optional implementations of the present embodiment, the element recognition model may also be obtained by training an existing image recognition model. For example, the element recognition model may be obtained by using a machine learning method to perform supervised training of an existing convolutional neural network (such as DenseBox, VGGNet, ResNet, or SegNet) on training samples. The electronic device may train the convolutional neural network in advance to obtain the element recognition model by the following steps. First, the electronic device may obtain a set of sample images in which the navigation indicator element is displayed and an annotation of each sample image, wherein the annotation of a sample image includes the location information and category information of the navigation indicator element in the sample image. Then, the electronic device may use a deep learning method to train the convolutional neural network, with the sample images as input and the annotation corresponding to each sample image in the set of sample images as output, obtaining the element recognition model. It should be noted that the convolutional neural network may include multiple convolutional layers for extracting features of the candidate region images. In practice, a convolutional neural network (CNN) is a feedforward neural network whose artificial neurons respond to surrounding units within a partial coverage area; it performs outstandingly in image processing, and can therefore be used to extract image features. The features of a candidate region image may include location parameters characterizing the position of the navigation indicator element in the candidate region image, and may also include category parameters characterizing the category of that navigation indicator element.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present embodiment. In the application scenario of Fig. 3, first, the terminal device 301 used by the user (for example, a laptop) sends an image to be detected 303, in which a navigation indicator element (for example, the camera of an electronic eye system) is displayed, to the server 302. Then, the server 302 performs image analysis on the image to be detected 303 to obtain image feature information 304. After that, based on the image feature information 304, the server 302 extracts from the image to be detected 303 at least one candidate region image 305 in which the probability that a navigation indicator element is present is greater than a preset threshold. Finally, the server 302 inputs the at least one candidate region image 305 into a pre-trained element recognition model to obtain the category information (for example, the category information of the camera of the electronic eye system) 306 and the location information (for example, image coordinates) 307 of the navigation indicator element in the image to be detected 303.
The method provided by the above embodiment of the present application achieves the generation of information related to the navigation indicator element by extracting candidate regions and applying machine learning.
With continued reference to Fig. 4, a flow 400 of another embodiment of a method for generating information according to the present application is shown. The method for generating information includes the following steps:
Step 401: obtaining a target image in which a navigation indicator element is displayed.
In the present embodiment, the electronic device on which the method for generating information runs (for example, the server 105 shown in Fig. 1) may directly retrieve a target image stored locally, or may receive a target image sent by another electronic device (for example, the terminal devices 101, 102, 103 shown in Fig. 1) through a wired or wireless connection. The target image may be an image, acquired by any of various image acquisition devices (such as a single-lens reflex camera, an industrial camera, or a smartphone), in which a navigation indicator element is displayed.
Step 402: performing super-resolution processing on the target image to obtain a super-resolution image as the image to be detected.
In the present embodiment, the electronic device may use an existing super-resolution processing technique (such as deterministic reconstruction, regularized reconstruction, non-uniform spatial sample interpolation, iterative back-projection, or set-theoretic reconstruction) to perform super-resolution processing on the target image, obtaining a super-resolution image as the image to be detected.
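Real super-resolution methods such as those the text names are considerably more involved; the sketch below only illustrates the input/output relationship of the step with a simple bilinear 2x upscale, which is an assumption for exposition and not the patent's algorithm.

```python
def upscale_2x_bilinear(image):
    """Double a grayscale image's size with simple bilinear interpolation."""
    h, w = len(image), len(image[0])
    out_h, out_w = 2 * h, 2 * w
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back to fractional source coordinates.
            sy, sx = y / 2.0, x / 2.0
            y0, x0 = min(int(sy), h - 1), min(int(sx), w - 1)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

small = [[0, 100], [100, 0]]
large = upscale_2x_bilinear(small)  # 4x4 image
```

The point of the step is that the enlarged image gives the later detection stages more pixels per navigation indicator element to work with.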
Steps 403, 404, and 405 are respectively identical to steps 202, 203, and 204 in Fig. 2 and are not repeated here.
Step 406: determining the geographic coordinates of the navigation indicator element based on the geographic coordinates of the image acquisition device that acquired the image to be detected and the location information of the navigation indicator element in the image to be detected.
In the present embodiment, the electronic device may determine the geographic coordinates of the navigation indicator element based on the geographic coordinates of the image acquisition device that acquired the image to be detected and the location information of the navigation indicator element in the image to be detected. Specifically, first, the electronic device may use the mapping between the image coordinate system and the camera coordinate system in the camera imaging principle to convert the image coordinates of the navigation indicator element into camera coordinates. Then, the electronic device may use the mapping between the camera coordinate system and the world coordinate system in the camera model to convert the camera coordinates of the navigation indicator element into the world coordinates of the navigation indicator element. Finally, the electronic device may use the world coordinates of the navigation indicator element to obtain the actual positional relationship between the image acquisition device and the navigation indicator element at the time the image was acquired, and use that positional relationship to obtain the geographic coordinates of the navigation indicator element in the geographic coordinate system. Here, the camera model is a model describing the mathematical mapping by which an object is transformed from a 3D world coordinate system into a 2D image coordinate system. The camera coordinate system is a rectangular coordinate system whose origin is the optical center of the camera, whose X and Y axes are respectively parallel to the X and Y axes of the image, and whose Z axis is the optical axis of the camera. The world coordinate system may refer to a three-dimensional coordinate system used to characterize the positions of the image acquisition device and the navigation indicator element.
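The coordinate chain described above (image coordinates -> camera coordinates -> world coordinates) can be sketched with an idealized pinhole model. The focal length, the known depth, and the identity camera pose below are illustrative assumptions; a real system needs calibrated intrinsics and extrinsics, plus a geodetic conversion to obtain geographic coordinates.

```python
def image_to_camera(u, v, depth, focal=500.0):
    """Back-project an image-plane point (origin at image center) at a known depth."""
    return (u * depth / focal, v * depth / focal, depth)

def camera_to_world(point, rotation, translation):
    """Apply a 3x3 rotation and a translation (the camera's pose in the world)."""
    x = sum(rotation[0][i] * point[i] for i in range(3)) + translation[0]
    y = sum(rotation[1][i] * point[i] for i in range(3)) + translation[1]
    z = sum(rotation[2][i] * point[i] for i in range(3)) + translation[2]
    return (x, y, z)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# An element 50 px right and 25 px above center, assumed 20 m from the camera:
cam_pt = image_to_camera(u=50.0, v=-25.0, depth=20.0)            # (2.0, -1.0, 20.0)
world_pt = camera_to_world(cam_pt, identity, (100.0, 200.0, 0.0))
```

The resulting world coordinates give the positional relationship between the acquisition device and the element, from which geographic coordinates follow once the device's own geographic position and orientation are known.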
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating information in the present embodiment highlights the step extending the acquisition of the image to be detected, and further adds the step of determining the geographic coordinates of the navigation indicator element, thereby achieving the generation of more comprehensive information related to the navigation indicator element.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating information. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating information of the present embodiment includes: a first acquisition unit 501, an analysis unit 502, an extraction unit 503 and a recognition unit 504. The first acquisition unit 501 is configured to acquire an image to be detected showing a navigation indicator element; the analysis unit 502 is configured to perform image analysis on the image to be detected to generate image feature information of the image to be detected; the extraction unit 503 is configured to extract, based on the image feature information, at least one candidate region image from the image to be detected, where the probability that the navigation indicator element is present in a candidate region image is greater than a preset threshold; and the recognition unit 504 is configured to input each candidate region image in the at least one candidate region image into a pre-trained element recognition model to obtain position information and category information of the navigation indicator element in the image to be detected, where the element recognition model is used to characterize the correspondence between an image and the position information and category information of the navigation indicator element in the image.
In the present embodiment, for the specific processing of the first acquisition unit 501, the analysis unit 502, the extraction unit 503 and the recognition unit 504 of the apparatus 500 for generating information and the technical effects thereof, reference may be made to the descriptions of step 201, step 202, step 203 and step 204 in the embodiment corresponding to Fig. 2, respectively; details are not repeated here.
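The cooperation of the four units can be sketched as a simple pipeline. All component functions below are illustrative stubs standing in for units 501-504, not the patent's actual implementation:

```python
from typing import Callable, List, Tuple

def generate_information(
    acquire: Callable[[], list],                     # first acquisition unit 501
    analyze: Callable[[list], dict],                 # analysis unit 502
    extract: Callable[[list, dict], List[list]],     # extraction unit 503
    recognize: Callable[[list], Tuple[tuple, str]],  # recognition unit 504
) -> List[Tuple[tuple, str]]:
    """Acquire an image, analyze its features, extract candidate regions,
    and recognize each region into (position information, category information)."""
    image = acquire()
    features = analyze(image)
    candidates = extract(image, features)
    return [recognize(region) for region in candidates]

# Minimal usage with stub components (dummy image, trivial features,
# the whole image as the single candidate region, a fixed recognition result).
result = generate_information(
    acquire=lambda: [[0] * 8] * 8,
    analyze=lambda img: {"mean": sum(map(sum, img))},
    extract=lambda img, f: [img],
    recognize=lambda region: ((0, 0, 8, 8), "left-turn arrow"),
)
print(result)  # [((0, 0, 8, 8), 'left-turn arrow')]
```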
In some optional implementations of the present embodiment, the first acquisition unit is further configured to: acquire a target image showing the navigation indicator element; and perform super-resolution processing on the target image to obtain a super-resolution image as the image to be detected.
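As a stand-in for the super-resolution step, the sketch below upscales a target image by plain nearest-neighbor interpolation; an actual implementation would use a learned super-resolution model rather than this simple resampling:

```python
import numpy as np

def upscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor upscaling as a placeholder for super-resolution
    processing: each pixel is repeated factor x factor times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

target = np.array([[1, 2],
                   [3, 4]])
sr = upscale_nearest(target, 2)   # "super-resolution image" used as image to be detected
print(sr.shape)  # (4, 4)
```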
In some optional implementations of the present embodiment, the apparatus 500 may further include: a second acquisition unit (not shown in the figure), configured to acquire a preset training sample, where the training sample includes a set of sample images showing the navigation indicator element and an annotation of each sample image, and the annotation includes position information and category information of the navigation indicator element in the sample image; and a training unit (not shown in the figure), configured to use a deep learning method to train the element recognition model with the training sample as input and the position information and category information of the navigation indicator element as output.
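The input/output contract of the training unit (sample images in, category annotations out) can be illustrated with a toy fit. A real implementation would train a deep detection network; the least-squares classifier below is only a stand-in, and all data here is randomly generated for illustration:

```python
import numpy as np

# 20 flattened 4x4 "sample images" with random one-hot "category information".
rng = np.random.default_rng(0)
samples = rng.random((20, 16))
labels = np.eye(3)[rng.integers(0, 3, 20)]

# Fit a linear map from images to category scores (stand-in for deep learning).
W, *_ = np.linalg.lstsq(samples, labels, rcond=None)

# The "trained model" maps an image to a predicted category index.
pred = samples @ W
accuracy = float((pred.argmax(1) == labels.argmax(1)).mean())
print(accuracy)
```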
In some optional implementations of the present embodiment, the apparatus 500 may further include: a determination unit (not shown in the figure), configured to determine the geographic coordinates of the navigation indicator element based on the pre-acquired geographic coordinates of the image capture device at the time of capturing the image to be detected and the position information of the navigation indicator element in the image to be detected.
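Given the geographic coordinates of the capture device and the element's local offset relative to it, the determination unit's final step can be sketched with an equirectangular approximation. The coordinates and offsets below are illustrative assumptions:

```python
import math

def offset_to_geographic(lat_deg, lon_deg, east_m, north_m):
    """Shift a geographic position by a small local (east, north) offset in meters.
    Equirectangular approximation: adequate for offsets of tens of meters."""
    earth_radius = 6378137.0  # WGS-84 equatorial radius, meters
    dlat = north_m / earth_radius
    dlon = east_m / (earth_radius * math.cos(math.radians(lat_deg)))
    return lat_deg + math.degrees(dlat), lon_deg + math.degrees(dlon)

# Capture device at (39.9, 116.4); element detected 10 m east and 5 m north of it.
lat, lon = offset_to_geographic(39.9, 116.4, 10.0, 5.0)
print(round(lat, 6), round(lon, 6))
```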
In some optional implementations of the present embodiment, the image feature information may include at least one of the following: frequency domain information, color information, and texture information. The frequency domain information may be information obtained by applying a Fourier transform to the image, used to characterize how sharply the gray levels vary in the image. The color information may be information describing the surface properties of the scene corresponding to the image or an image region; it may be characterized by a color histogram, a color set, color moments, a color coherence vector and a color correlogram. The texture information may be information describing the spatial color distribution and light intensity distribution of the image or an image region. It should be noted that the image feature information is not limited to the items listed above and may also include other information.
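The frequency domain information described above can be sketched with a 2D Fourier transform. The energy-ratio measure below is one simple choice of "gray-level variation" statistic, not the patent's specific feature:

```python
import numpy as np

def frequency_feature(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside the DC term: 0 for a constant image,
    approaching 1 as gray levels vary more sharply."""
    spectrum = np.abs(np.fft.fft2(gray)) ** 2
    total = spectrum.sum()
    return float((total - spectrum[0, 0]) / total)

flat = np.full((8, 8), 128.0)                      # constant image: no variation
checker = np.indices((8, 8)).sum(0) % 2 * 255.0    # checkerboard: rapid variation
print(frequency_feature(flat))     # 0.0
print(frequency_feature(checker))  # 0.5
```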
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 suitable for implementing a server of the embodiments of the present application is shown. The server shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 606 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: a storage section 606 including a hard disk and the like; and a communication section 607 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication section 607 performs communication processing via a network such as the Internet. A driver 608 is also connected to the I/O interface 605 as needed. A removable medium 609, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 608 as needed, so that a computer program read therefrom may be installed into the storage section 606 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 607, and/or installed from the removable medium 609. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are executed.

It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by, or in combination with, an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the architectures, functions and operations that may be implemented by the systems, methods and computer program products according to the various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a portion of code, and the module, program segment or portion of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in a block diagram and/or flowchart, and combinations of boxes in a block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software or by means of hardware. The described units may also be provided in a processor; for example, a processor may be described as including a first acquisition unit, a first execution unit and a second execution unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the first acquisition unit may also be described as "a unit for acquiring an image to be detected showing a navigation indicator element".
As another aspect, the present application further provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs, and the one or more programs, when executed by the apparatus, cause the apparatus to: acquire an image to be detected showing a navigation indicator element; perform image analysis on the image to be detected to generate image feature information of the image to be detected; extract, based on the image feature information, at least one candidate region image from the image to be detected, where the probability that the navigation indicator element is present in a candidate region image is greater than a preset threshold; and input each candidate region image in the at least one candidate region image into a pre-trained element recognition model to obtain position information and category information of the navigation indicator element in the image to be detected, where the element recognition model is used to characterize the correspondence between an image and the position information and category information of the navigation indicator element in the image.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and, without departing from the above inventive concept, should also cover other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (12)
1. A method for generating information, comprising:
acquiring an image to be detected showing a navigation indicator element;
performing image analysis on the image to be detected to generate image feature information of the image to be detected;
extracting, based on the image feature information, at least one candidate region image from the image to be detected, wherein the probability that the navigation indicator element is present in a candidate region image is greater than a preset threshold; and
inputting each candidate region image in the at least one candidate region image into a pre-trained element recognition model to obtain position information and category information of the navigation indicator element in the image to be detected, wherein the element recognition model is used to characterize the correspondence between an image and the position information and category information of the navigation indicator element in the image.
2. The method according to claim 1, wherein the acquiring an image to be detected showing a navigation indicator element comprises:
acquiring a target image showing the navigation indicator element; and
performing super-resolution processing on the target image to obtain a super-resolution image as the image to be detected.
3. The method according to claim 1, wherein the element recognition model is trained through the following steps:
acquiring a set of sample images showing the navigation indicator element and an annotation of each sample image, wherein the annotation of a sample image includes position information and category information of the navigation indicator element in the sample image; and
using a deep learning method, training the element recognition model with each sample image in the set of sample images as input and the annotation corresponding to each sample image in the set of sample images as output.
4. The method according to claim 1, further comprising, after the obtaining position information and category information of the navigation indicator element in the image to be detected:
determining geographic coordinates of the navigation indicator element based on geographic coordinates of an image capture device that captured the image to be detected and the position information of the navigation indicator element in the image to be detected.
5. The method according to one of claims 1-4, wherein the image feature information includes: frequency domain information, color information, and texture information.
6. An apparatus for generating information, comprising:
a first acquisition unit, configured to acquire an image to be detected showing a navigation indicator element;
an analysis unit, configured to perform image analysis on the image to be detected to generate image feature information of the image to be detected;
an extraction unit, configured to extract, based on the image feature information, at least one candidate region image from the image to be detected, wherein the probability that the navigation indicator element is present in a candidate region image is greater than a preset threshold; and
a recognition unit, configured to input each candidate region image in the at least one candidate region image into a pre-trained element recognition model to obtain position information and category information of the navigation indicator element in the image to be detected, wherein the element recognition model is used to characterize the correspondence between an image and the position information and category information of the navigation indicator element in the image.
7. The apparatus according to claim 6, wherein the first acquisition unit is further configured to:
acquire a target image showing the navigation indicator element; and
perform super-resolution processing on the target image to obtain a super-resolution image as the image to be detected.
8. The apparatus according to claim 6, further comprising:
a second acquisition unit, configured to acquire a set of sample images showing the navigation indicator element and an annotation of each sample image, wherein the annotation of a sample image includes position information and category information of the navigation indicator element in the sample image; and
a training unit, configured to use a deep learning method to train the element recognition model with each sample image in the set of sample images as input and the annotation corresponding to each sample image in the set of sample images as output.
9. The apparatus according to claim 6, further comprising:
a determination unit, configured to determine geographic coordinates of the navigation indicator element based on geographic coordinates of an image capture device that captured the image to be detected and the position information of the navigation indicator element in the image to be detected.
10. The apparatus according to one of claims 6-9, wherein the image feature information includes: frequency domain information, color information, and texture information.
11. A server, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable storage medium storing a computer program, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711386919.9A CN108132054A (en) | 2017-12-20 | 2017-12-20 | For generating the method and apparatus of information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711386919.9A CN108132054A (en) | 2017-12-20 | 2017-12-20 | For generating the method and apparatus of information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108132054A true CN108132054A (en) | 2018-06-08 |
Family
ID=62390983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711386919.9A Pending CN108132054A (en) | 2017-12-20 | 2017-12-20 | For generating the method and apparatus of information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108132054A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101067557A (en) * | 2007-07-03 | 2007-11-07 | 北京控制工程研究所 | Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle |
CN104573715A (en) * | 2014-12-30 | 2015-04-29 | 百度在线网络技术(北京)有限公司 | Recognition method and device for image main region |
CN106650641A (en) * | 2016-12-05 | 2017-05-10 | 北京文安智能技术股份有限公司 | Traffic light positioning and identification method, device and system |
CN106851046A (en) * | 2016-12-28 | 2017-06-13 | 中国科学院自动化研究所 | Video dynamic super-resolution processing method and system |
CN107290738A (en) * | 2017-06-27 | 2017-10-24 | 清华大学苏州汽车研究院(吴江) | A kind of method and apparatus for measuring front vehicles distance |
CN107305684A (en) * | 2016-04-18 | 2017-10-31 | 瑞萨电子株式会社 | Image processing system, image processing method and picture transmitter device |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108548539A (en) * | 2018-06-28 | 2018-09-18 | Oppo广东移动通信有限公司 | Air navigation aid and device based on image recognition, terminal, readable storage medium storing program for executing |
CN109064464A (en) * | 2018-08-10 | 2018-12-21 | 北京百度网讯科技有限公司 | Method and apparatus for detecting battery pole piece burr |
CN109857880A (en) * | 2019-01-16 | 2019-06-07 | 创新奇智(宁波)科技有限公司 | A kind of data processing method based on model, device and electronic equipment |
CN109857880B (en) * | 2019-01-16 | 2022-04-05 | 创新奇智(上海)科技有限公司 | Model-based data processing method and device and electronic equipment |
CN109872392A (en) * | 2019-02-19 | 2019-06-11 | 北京百度网讯科技有限公司 | Man-machine interaction method and device based on high-precision map |
CN109872392B (en) * | 2019-02-19 | 2023-08-25 | 阿波罗智能技术(北京)有限公司 | Man-machine interaction method and device based on high-precision map |
CN112132853A (en) * | 2020-11-30 | 2020-12-25 | 湖北亿咖通科技有限公司 | Method and device for constructing ground guide arrow, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108132054A (en) | For generating the method and apparatus of information | |
CN110400363A (en) | Map constructing method and device based on laser point cloud | |
CN109753928A (en) | The recognition methods of architecture against regulations object and device | |
EP3605394A1 (en) | Method and apparatus for recognizing body movement | |
CN108154196B (en) | Method and apparatus for exporting image | |
Workman et al. | A unified model for near and remote sensing | |
CN103679674B (en) | Method and system for splicing images of unmanned aircrafts in real time | |
US20190130603A1 (en) | Deep-learning based feature mining for 2.5d sensing image search | |
JP2019514123A (en) | Remote determination of the quantity stored in containers in geographical areas | |
EP2849117B1 (en) | Methods, apparatuses and computer program products for automatic, non-parametric, non-iterative three dimensional geographic modeling | |
CN109029381A (en) | A kind of detection method of tunnel slot, system and terminal device | |
CN105141924B (en) | Wireless image monitoring system based on 4G technologies | |
CN105120237B (en) | Wireless image monitoring method based on 4G technologies | |
US9208555B1 (en) | Method for inspection of electrical equipment | |
CN110428490B (en) | Method and device for constructing model | |
Hebbalaguppe et al. | Telecom Inventory management via object recognition and localisation on Google Street View Images | |
Hijji et al. | 6G connected vehicle framework to support intelligent road maintenance using deep learning data fusion | |
CN108133197A (en) | For generating the method and apparatus of information | |
CN115375868B (en) | Map display method, remote sensing map display method, computing device and storage medium | |
Feng et al. | A novel saliency detection method for wild animal monitoring images with WMSN | |
CN112668675B (en) | Image processing method and device, computer equipment and storage medium | |
CN108038473A (en) | Method and apparatus for output information | |
CN109903308A (en) | For obtaining the method and device of information | |
CN108197563A (en) | For obtaining the method and device of information | |
CN116561240A (en) | Electronic map processing method, related device and medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180608 |