CN109903308A - Method and device for obtaining information - Google Patents

Method and device for obtaining information

Info

Publication number
CN109903308A
CN109903308A (application CN201711297504.4A)
Authority
CN
China
Prior art keywords
frame
information
processed
point cloud
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711297504.4A
Other languages
Chinese (zh)
Other versions
CN109903308B (en)
Inventor
Zhang Ye (张晔)
Wang Jun (王军)
Wang Hao (王昊)
Wang Liang (王亮)
Zhang Lizhi (张立志)
Mao Jiming (毛继明)
Zhu Xiaoxing (朱晓星)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201711297504.4A priority Critical patent/CN109903308B/en
Publication of CN109903308A publication Critical patent/CN109903308A/en
Application granted granted Critical
Publication of CN109903308B publication Critical patent/CN109903308B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a method and device for obtaining information. One specific embodiment of the method includes: obtaining an image frame sequence to be processed, where the sequence includes multiple image frames to be processed, each frame containing annotation information produced by a perception algorithm for the object images in the frame, the annotation information indicating the positions of the object images; extracting image frames to be processed from the sequence and, for each object image a frame contains, determining the object image's location on that frame from the frame's annotation information, thereby obtaining the object image's location-information sequence corresponding to the image frame sequence; and calculating, from the location-information sequence, the velocity information of the object corresponding to the object image. With the obtained location and velocity information, the perception algorithm can be evaluated and optimized, improving the accuracy of its annotations.

Description

Method and device for obtaining information
Technical field
The embodiments of the present application relate to the technical field of data processing, in particular to the field of computer technology, and more particularly to a method and device for obtaining information.
Background technique
With the development of information technology, people can obtain specific information through various information-acquisition methods to meet the needs of work and life. Using perception algorithms, people can pick out information that meets set requirements from massive amounts of data. To obtain the needed information, the perception algorithm must be trained. In general, the training process is as follows: first, information is annotated manually; then the perception algorithm learns from the manual annotation process so that it can produce annotation information independently.
Summary of the invention
The purpose of the embodiments of the present application is to propose a method and device for obtaining information.
In a first aspect, an embodiment of the present application provides a method for obtaining information, the method comprising: obtaining an image frame sequence to be processed, where the sequence includes multiple image frames to be processed, each frame containing annotation information produced by a perception algorithm for the object images in the frame, the annotation information indicating the positions of the object images; extracting image frames to be processed from the sequence and, for each object image a frame contains, determining the object image's location on that frame from the frame's annotation information, thereby obtaining the object image's location-information sequence corresponding to the image frame sequence; and calculating, from the location-information sequence, the velocity information of the object corresponding to the object image.
In some embodiments, each image frame to be processed includes a pixel image frame and a first timestamp corresponding to the pixel image frame, and determining the object image's location on the frame from the frame's annotation information includes: establishing a plane rectangular coordinate system and placing the pixel image frame in a set region of that coordinate system, where the abscissa and ordinate of the coordinate system represent distance; for each object image the pixel image frame contains, setting a first marker point on the object image, the first marker point marking the position, in the pixel image frame, of a first set position on the object corresponding to the object image; and taking the first marker point's coordinate values on the plane rectangular coordinate system as the object image's position value on the pixel image frame and the time corresponding to the frame's first timestamp as the time value matching that position value, then combining the position value and time value into the object image's location information on the frame.
In some embodiments, calculating the velocity information of the object corresponding to the object image from the location-information sequence includes: determining the first collection period of the pixel image frames from the first timestamps between them; and, for each object image a pixel image frame contains, determining from two adjacent location entries in that object image's location-information sequence and the first collection period the object's first velocity information within the first collection period, thereby obtaining a first velocity information sequence corresponding to the location-information sequence.
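The first-velocity computation can be sketched as follows. This is a minimal illustration under assumed conventions: positions are (x, y) values on the distance-scaled coordinate system, timestamps are in seconds, and the `((x, y), t)` tuple layout is a hypothetical data format not fixed by the patent.

```python
import math

def speed_sequence(locations):
    """Compute per-interval speeds from a location-information sequence.

    Each entry is ((x, y), t): a position value plus its time value. Speed
    over each adjacent pair is displacement / collection period.
    """
    speeds = []
    for ((x0, y0), t0), ((x1, y1), t1) in zip(locations, locations[1:]):
        period = t1 - t0                      # first collection period
        distance = math.hypot(x1 - x0, y1 - y0)
        speeds.append(distance / period)
    return speeds

# Object moving 3 m right and 4 m up every 0.5 s -> 10 m/s per interval
locs = [((0.0, 0.0), 0.0), ((3.0, 4.0), 0.5), ((6.0, 8.0), 1.0)]
print(speed_sequence(locs))  # [10.0, 10.0]
```

Deriving the period from adjacent timestamps rather than assuming a fixed frame rate tolerates uneven acquisition intervals.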
In some embodiments, calculating the velocity information further includes: computing the first speed difference between adjacent entries in the first velocity information sequence and, if a first speed difference exceeds a first set speed threshold, treating the two pixel image frames corresponding to that difference as characteristic image frames.
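The characteristic-frame selection by speed difference can be sketched likewise. Representing "the two frames corresponding to the difference" as index pairs into the speed sequence is an assumption for illustration; a real object could not change speed this abruptly, which is exactly why such jumps flag likely annotation errors.

```python
def characteristic_frame_pairs(speeds, threshold):
    """Return index pairs (i, i+1) of adjacent speed entries whose absolute
    difference exceeds the set speed threshold; the corresponding frames
    are flagged as characteristic image frames."""
    flagged = []
    for i in range(len(speeds) - 1):
        if abs(speeds[i + 1] - speeds[i]) > threshold:
            flagged.append((i, i + 1))
    return flagged

speeds = [10.0, 10.2, 25.0, 10.1]  # the sudden jump suggests a bad annotation
print(characteristic_frame_pairs(speeds, threshold=5.0))  # [(1, 2), (2, 3)]
```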
In some embodiments, each image frame to be processed includes a point cloud image frame and a second timestamp corresponding to the point cloud image frame, where a point cloud image frame contains point cloud data and the point cloud data describes space objects by three-dimensional coordinate points; determining the object image's location on the frame from the frame's annotation information then includes: constructing a virtual three-dimensional space from the point cloud data the frame contains; determining the point cloud data that the annotation information points to for the space object corresponding to the object image as annotated point cloud data; and setting a second marker point on the space object indicated by the annotated point cloud data, taking the point cloud data corresponding to the second marker point as the space object's location information on the point cloud image frame, where the second marker point marks the position, in the point cloud image frame, of a second set position on the space object.
In some embodiments, calculating the velocity information of the object corresponding to the object image from the location-information sequence includes: determining the second collection period of the point cloud image frames from the second timestamps between them; and, for each space object a point cloud image frame contains, determining from two adjacent location entries in that space object's location-information sequence and the second collection period the object's second velocity information within the second collection period, thereby obtaining a second velocity information sequence corresponding to the location-information sequence.
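For point cloud frames the computation is the same, only with a three-dimensional marker-point coordinate. Again the `((x, y, z), t)` layout is an assumed format, not something the patent prescribes.

```python
import math

def speed_sequence_3d(locations):
    """Speeds from a sequence of ((x, y, z), t) second-marker-point samples."""
    speeds = []
    for (p0, t0), (p1, t1) in zip(locations, locations[1:]):
        distance = math.dist(p0, p1)          # Euclidean distance in 3-D
        speeds.append(distance / (t1 - t0))   # second collection period
    return speeds

# Marker moves 3 m (a (1, 2, 2) displacement) in 0.25 s -> 12 m/s
locs = [((0.0, 0.0, 0.0), 0.0), ((1.0, 2.0, 2.0), 0.25)]
print(speed_sequence_3d(locs))  # [12.0]
```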
In some embodiments, calculating the velocity information further includes: computing the second speed difference between adjacent entries in the second velocity information sequence and, if a second speed difference exceeds a second set speed threshold, treating the two point cloud image frames corresponding to that difference as characteristic image frames.
In some embodiments, the method further includes a step of optimizing the perception algorithm, which includes: obtaining the annotation information and scene-type information corresponding to each characteristic image frame, where the scene-type information includes an occlusion scene type, a false-detection scene type, and a missed-detection scene type; and, using a machine learning method, taking the characteristic image frames as the perception algorithm's input and the corresponding scene-type information and annotation information as its expected output, training the perception algorithm to obtain an optimized perception algorithm.
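The data-assembly side of this optimization step can be sketched as below. The patent does not specify the machine learning method or any data schema, so only the input/target pairing is shown, and every field name here is an assumption.

```python
# Scene types named in the patent: occlusion, false detection, missed detection
SCENE_TYPES = {"occlusion", "false_detection", "missed_detection"}

def build_training_samples(characteristic_frames):
    """Pair each characteristic frame (input) with its scene type and
    annotation information (expected output).

    `characteristic_frames` is assumed to be a list of dicts with keys
    'frame', 'annotation', and 'scene_type'.
    """
    samples = []
    for item in characteristic_frames:
        if item["scene_type"] not in SCENE_TYPES:
            raise ValueError(f"unknown scene type: {item['scene_type']}")
        # input -> frame data; target -> (scene type, annotation)
        samples.append((item["frame"], (item["scene_type"], item["annotation"])))
    return samples

frames = [{"frame": "frame_042", "annotation": {"box": (10, 20, 50, 80)},
           "scene_type": "occlusion"}]
print(build_training_samples(frames))
```

The resulting (input, target) pairs would then feed whatever supervised training procedure the implementation chooses.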
In a second aspect, an embodiment of the present application provides a device for obtaining information, the device comprising: a data acquisition unit for obtaining an image frame sequence to be processed, where the sequence includes multiple image frames to be processed, each frame containing annotation information produced by a perception algorithm for the object images in the frame, the annotation information indicating the positions of the object images; a location-information acquisition unit for extracting image frames to be processed from the sequence and, for each object image a frame contains, determining the object image's location on that frame from the frame's annotation information, thereby obtaining the object image's location-information sequence corresponding to the image frame sequence; and a velocity-information acquisition unit for calculating, from the location-information sequence, the velocity information of the object corresponding to the object image.
In some embodiments, each image frame to be processed includes a pixel image frame and a first timestamp corresponding to the pixel image frame, and the location-information acquisition unit includes: a pixel-image-frame setting subunit for establishing a plane rectangular coordinate system and placing the pixel image frame in a set region of that coordinate system, where the abscissa and ordinate represent distance; a first-marker-point setting subunit for setting, on each object image the pixel image frame contains, a first marker point marking the position, in the pixel image frame, of a first set position on the object corresponding to the object image; and a first-location-information subunit for taking the first marker point's coordinate values on the plane rectangular coordinate system as the object image's position value on the pixel image frame and the time corresponding to the frame's first timestamp as the matching time value, then combining the position value and time value into the object image's location information on the frame.
In some embodiments, the velocity-information acquisition unit includes: a first-collection-period subunit for determining the first collection period of the pixel image frames from the first timestamps between them; and a first-velocity-information-sequence subunit for determining, for each object image a pixel image frame contains, the object's first velocity information within the first collection period from two adjacent location entries in that object image's location-information sequence and the first collection period, thereby obtaining a first velocity information sequence corresponding to the location-information sequence.
In some embodiments, the velocity-information acquisition unit further computes the first speed difference between adjacent entries in the first velocity information sequence and, if a first speed difference exceeds a first set speed threshold, treats the two pixel image frames corresponding to that difference as characteristic image frames.
In some embodiments, each image frame to be processed includes a point cloud image frame and a second timestamp corresponding to the point cloud image frame, where a point cloud image frame contains point cloud data and the point cloud data describes space objects by three-dimensional coordinate points; the location-information acquisition unit then includes: a virtual-three-dimensional-space construction subunit for constructing a virtual three-dimensional space from the point cloud data the frame contains; an annotated-point-cloud-data determination subunit for determining the point cloud data that the annotation information points to for the space object corresponding to the object image as annotated point cloud data; and a second-location-information subunit for setting a second marker point on the space object indicated by the annotated point cloud data and taking the point cloud data corresponding to the second marker point as the space object's location information on the point cloud image frame, where the second marker point marks the position, in the point cloud image frame, of a second set position on the space object.
In some embodiments, the velocity-information acquisition unit includes: a second-collection-period subunit for determining the second collection period of the point cloud image frames from the second timestamps between them; and a second-velocity-information-sequence subunit for determining, for each space object a point cloud image frame contains, the object's second velocity information within the second collection period from two adjacent location entries in that space object's location-information sequence and the second collection period, thereby obtaining a second velocity information sequence corresponding to the location-information sequence.
In some embodiments, the velocity-information acquisition unit further computes the second speed difference between adjacent entries in the second velocity information sequence and, if a second speed difference exceeds a second set speed threshold, treats the two point cloud image frames corresponding to that difference as characteristic image frames.
In some embodiments, the device further includes an optimization unit for optimizing the perception algorithm, the optimization unit comprising: a target-information subunit for obtaining the annotation information and scene-type information corresponding to each characteristic image frame, where the scene-type information includes an occlusion scene type, a false-detection scene type, and a missed-detection scene type; and an optimization subunit for, using a machine learning method, taking the characteristic image frames as the perception algorithm's input and the corresponding scene-type information and annotation information as its expected output, and training the perception algorithm to obtain an optimized perception algorithm.
In a third aspect, an embodiment of the present application provides a server comprising one or more processors and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method for obtaining information of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method for obtaining information of the first aspect.
The method and device for obtaining information provided by the embodiments of the present application extract image frames to be processed from an image frame sequence to be processed, determine the location of each object image on each frame to obtain the object image's location-information sequence over the sequence, and finally calculate from the location-information sequence the velocity information of the object corresponding to the object image. With the obtained location and velocity information, the perception algorithm can be evaluated and optimized, improving the accuracy of its annotations.
Detailed description of the invention
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-restrictive embodiments, read with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for obtaining information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for obtaining information according to the present application;
Fig. 4 is a structural schematic diagram of one embodiment of the device for obtaining information according to the present application;
Fig. 5 is a structural schematic diagram of a computer system adapted for implementing the server of the embodiments of the present application.
Specific embodiment
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to restrict it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the related invention.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method for obtaining information or the device for obtaining information of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101 and 102, a network 103, and a server 104. The network 103 serves as the medium providing communication links between the terminal devices 101, 102 and the server 104, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102 to interact with the server 104 over the network 103, for example to receive or send messages. Various data-processing applications may be installed on the terminal devices 101, 102, such as image-acquisition applications, point-cloud-data-acquisition applications, and video-acquisition applications.
The terminal devices 101, 102 may be various electronic devices with image acquisition that support data transmission, including but not limited to IP cameras, surveillance cameras, terminal-device cameras, and vehicle-mounted point-cloud-data-acquisition equipment.
The server 104 may be a server providing various services, for example a server that performs data processing, through a perception algorithm on the server 104, on the image data sent by the terminal devices 101, 102, and that obtains information from the image frame sequence to be processed output by the perception algorithm. The server may process that sequence to obtain data such as the location information and velocity information of the objects corresponding to the object images the image frames to be processed contain.
It should be noted that the method for obtaining information provided by the embodiments of the present application is generally executed by the server 104, and accordingly the device for obtaining information is generally arranged in the server 104.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely schematic; any number of terminal devices, networks, and servers may be provided as required by the implementation.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for obtaining information according to the present application is shown. The method for obtaining information includes the following steps:
Step 201: obtain an image frame sequence to be processed.
In this embodiment, the electronic equipment on which the method for obtaining information runs (for example, the server 104 shown in Fig. 1) may obtain the image frame sequence to be processed output by the perception algorithm through a wired or wireless connection. The image frame sequence to be processed includes multiple sequentially arranged image frames to be processed, each containing annotation information produced by the perception algorithm for the object images in the frame, the annotation information indicating the positions of the object images. It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, ZigBee, UWB (ultra wideband), and other connections now known or developed in the future. It should be noted that the perception algorithm may be stored on the server 104 or on other equipment.
The terminal devices 101, 102 may acquire images or video. A video may be regarded as being composed of an image sequence. Under normal conditions, annotating video is harder for a perception algorithm than annotating individual images; for this reason, this embodiment is illustrated with video annotation.
As shown in Fig. 1, the terminal device 101 may be a camera on a stationary object. For example, the terminal device 101 may be arranged on a pole at a road crossing to acquire image sequences of the vehicles and pedestrians at the crossing. That is, the image sequence acquired by the terminal device 101 may be regarded as images acquired from a resting position. The terminal device 102 may be point-cloud-data-acquisition equipment on a moving object. For example, the terminal device 102 may be arranged on the roof of a vehicle to acquire image sequences of other vehicles while the vehicle is driving. That is, the image sequence acquired by the terminal device 102 may be regarded as images acquired from a moving position.
The image sequences acquired by the terminal devices 101, 102 may be sent to the equipment containing the perception algorithm, which can annotate specific information in the image sequences. Afterwards, the server 104 can obtain the image frame sequence to be processed output by the perception algorithm. Since the perception algorithm annotates the object images an image contains, each image frame to be processed in the sequence also contains annotation information for the corresponding object images. In general, the annotation information indicates the position of an object image in the image frame to be processed. For example, the annotation information may be a box indicating a certain object image, with the object image contained in the box. Optionally, the annotation information may also include colors or symbols indicating the object image, or text describing it, depending on actual needs; these are not enumerated one by one here.
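A labelled bounding box is one possible realisation of the box-style annotation described above. The patent only requires that the annotation indicate the object image's position, so the fields below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A box-style annotation: the object image lies inside the box."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    label: str = ""  # optional descriptive text, as the patent allows

    def contains(self, x, y):
        """True if a point falls inside the annotated box."""
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

box = Annotation(10, 20, 50, 80, label="vehicle")
print(box.contains(30, 40))  # True
print(box.contains(60, 40))  # False
```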
Step 202: extract an image frame to be processed from the image frame sequence to be processed and, for each object image the frame contains, determine the object image's location on the frame from the frame's annotation information, thereby obtaining the object image's location-information sequence corresponding to the image frame sequence.
Perception algorithms may differ for different image sequences and for the different information to be annotated, and the same image sequence may also be annotated with different perception algorithms. In general, a perception algorithm must learn from manually annotated images to obtain the method of annotating the corresponding image content. In practice, owing to the perception algorithm's own notation method and annotation parameters, as well as the clarity and brightness of the annotated image sequence and the large number of objects in the images, the annotation information a perception algorithm produces is often less precise than manual annotation, and this precision is reflected in the object-image positions the annotation information indicates.
To this end, the server 104 of the present application may extract image frames to be processed from the image frame sequence to be processed. An image frame to be processed may contain multiple object images; for each object image the frame contains, the object image's position on the frame is determined from the annotation information corresponding to that object image. In general, the image frames to be processed in the sequence are acquired periodically; therefore, determining the same object image's location on each image frame to be processed in the sequence yields that object image's location-information sequence corresponding to the image frame sequence. Afterwards, the accuracy of the perception algorithm's annotation can be judged from the object image's location on each frame and from the location-information sequence.
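The extraction step above can be sketched as follows. The per-object identifier used to collect the same object's positions across frames is an assumption: the patent does not say how an object image is matched from frame to frame.

```python
def location_sequences(frames):
    """Collect each object's per-frame locations into a sequence.

    frames: list of (timestamp, {object_id: (x, y)}) tuples, one per
    image frame to be processed, in acquisition order.
    Returns {object_id: [((x, y), t), ...]} -- a location-information
    sequence per object image.
    """
    sequences = {}
    for timestamp, annotations in frames:
        for object_id, position in annotations.items():
            sequences.setdefault(object_id, []).append((position, timestamp))
    return sequences

frames = [(0.0, {"car1": (0.0, 0.0), "ped1": (5.0, 5.0)}),
          (0.5, {"car1": (3.0, 4.0)})]
print(location_sequences(frames))
# {'car1': [((0.0, 0.0), 0.0), ((3.0, 4.0), 0.5)], 'ped1': [((5.0, 5.0), 0.0)]}
```

Objects that drop out of later frames (like `ped1` here, possibly a missed detection) simply end up with shorter sequences, which is itself useful evidence when judging the annotation.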
Divide according to data format, picture frame to be processed can be divided into be made of pixel pixel image frame (it is understood that For two dimensional image) and the point cloud chart that is made of point cloud data as frame (can be understood as 3-D image), individually below to from this two The case where location information of subject image is determined in class image is analyzed.
In some optional implementations of the present embodiment, above-mentioned picture frame to be processed may include pixel image frame and The first time of respective pixel picture frame stabs, and, the object is determined above by the markup information that picture frame to be processed includes Location information of the image on the picture frame to be processed may comprise steps of:
The first step establishes plane right-angle coordinate, and pixel image frame is arranged in the setting of above-mentioned plane right-angle coordinate Region.
When picture frame to be processed is pixel image frame, terminal device 101,102 can be with when obtaining pixel image frame The first time stamp of the pixel image frame is acquired simultaneously.When stabbing the acquisition for recording respective pixel picture frame at the first time Between.
Seen from the above description, 101 acquired image sequence of terminal device is it is also assumed that acquired from resting position The image sequence arrived, the image sequence can recorde position of the mobile object in each image in image sequence.Due to Terminal device 101 is that image is acquired under stationary state, and therefore, the image that terminal device 101 can be acquired is as same distance Scale gets off the position of determining object in the picture.
In order to determine the position of subject image that pixel image frame includes, the present embodiment can establish plane rectangular coordinates The setting regions that plane right-angle coordinate is arranged in pixel image frame (such as can be the first of plane right-angle coordinate by system Quadrant), in this way, can determine the location information of all pixel image frames according to same distance scale.Above-mentioned flat square The abscissa and ordinate of coordinate system can indicate distance.
The first mark point is arranged for each subject image that pixel image frame includes in second step in subject image.
Object usually has certain shape and volume, in order to accurately determine the subject image of object in pixel image frame Location information, can determine the first setting position on object, object is characterized with the first setting position.It is corresponding, it can be with First mark point of corresponding first setting position is set in subject image.That is, above-mentioned first mark point is for label and object Position of first setting position in pixel image frame on the corresponding object of image.In this way, can by the first mark point come Determine position letter of the corresponding subject image in each pixel image frame (picture frame i.e. to be processed) of image frame sequence to be processed Breath.It should be noted that the quantity of the first mark point, which can be one or more, (when to be multiple, can pass through multiple first The direction change of reference points detection object), it is specific depending on actual needs.
In the third step, the coordinate value of the first mark point on the plane rectangular coordinate system is used as the position value of the object image on the pixel image frame, the time corresponding to the first timestamp of the pixel image frame is used as the time value corresponding to the position value, and the position value and the time value are combined into the location information of the object image on the to-be-processed image frame.
As can be seen from the above description, all pixel image frames contained in the to-be-processed image frame sequence are placed under the same distance scale; therefore, the coordinate value of the first mark point on the plane rectangular coordinate system may be regarded as the position value of the object image on the pixel image frame. Then, the time at which the terminal device 101 acquired the pixel image frame (that is, the time corresponding to the first timestamp) is used as the time value corresponding to the position value, and the position value and the time value are combined into the location information of the object image on the to-be-processed image frame. That is, the location information includes two parts: the first part is the position value, and the second part is the time value, where the position value in turn includes the abscissa value and the ordinate value of the mark point on the plane rectangular coordinate system.
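For illustration only, a minimal sketch of combining position values with time values into a location information sequence is given below. The concrete record layout, the frame timestamps and the mark-point coordinates are all hypothetical; the embodiment does not prescribe any particular data structure.

```python
from typing import NamedTuple

class LocationInfo(NamedTuple):
    """Location information of one object image on one pixel image frame."""
    x: float  # abscissa of the first mark point on the plane coordinate system
    y: float  # ordinate of the first mark point
    t: float  # time value taken from the frame's first timestamp

# Hypothetical first timestamps and first-mark-point coordinates, one per frame.
first_timestamps = [0.0, 0.1, 0.2]
mark_points = [(1.0, 2.0), (1.5, 2.0), (2.0, 2.0)]

# Combining each position value with its time value yields the location
# information sequence over the to-be-processed image frame sequence.
location_sequence = [LocationInfo(x, y, t)
                     for (x, y), t in zip(mark_points, first_timestamps)]
print(location_sequence[1])  # LocationInfo(x=1.5, y=2.0, t=0.1)
```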
In some optional implementations of the present embodiment, the to-be-processed image frame may include a point cloud image frame and a second timestamp corresponding to the point cloud image frame, the point cloud image frame includes point cloud data, and the point cloud data is used to describe a space object through three-dimensional coordinate points. Accordingly, determining, through the markup information contained in the to-be-processed image frame, the location information of the object image on the to-be-processed image frame may include the following steps:
In the first step, a virtual three-dimensional space is constructed from the point cloud data contained in the point cloud image frame.
Similar to the case of the pixel image frames described above, the image sequence acquired by the terminal device 102 usually takes the time for the point cloud data acquisition device to rotate one full circle as the collection period, and the point cloud data includes three-dimensional coordinates. Therefore, a three-dimensional coordinate system can be established, and the images acquired by the terminal device 102 can be placed under the same distance scale to determine the positions of objects in the images.
A point cloud image frame is composed of point cloud data, and the point cloud data generally includes three-dimensional coordinates in space. Therefore, a three-dimensional-space origin may be set, and the x-axis, y-axis and z-axis of the three-dimensional space may be selected according to the three-dimensional coordinates of the point cloud data; afterwards, on the basis of the three-dimensional-space origin, x-axis, y-axis and z-axis, a virtual three-dimensional space is constructed from the point cloud data contained in the point cloud image frame.
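As an illustrative sketch under stated assumptions (the origin is placed at the minimum corner of the cloud; the embodiment leaves the choice of origin and axes open, and the point coordinates below are hypothetical), constructing the shared virtual three-dimensional space from one frame's point cloud data might look like:

```python
# Hypothetical point cloud data (x, y, z) from one point cloud image frame.
point_cloud = [(1.0, 2.0, 0.5), (1.5, 2.5, 1.0), (2.0, 3.0, 0.5)]

# Choose a three-dimensional-space origin; here, the minimum corner of the cloud.
origin = tuple(min(p[i] for p in point_cloud) for i in range(3))

# Re-express every point relative to that origin along the chosen x-, y- and
# z-axes; the resulting coordinates live in the shared virtual 3-D space.
virtual_space = [tuple(p[i] - origin[i] for i in range(3)) for p in point_cloud]
print(virtual_space)  # [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5), (1.0, 1.0, 0.0)]
```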
In the second step, the point cloud data of the space object, corresponding to the object image, that the markup information points to is determined as marked point cloud data.
Point cloud data can accurately determine the position in space of the space object corresponding to an object image. Therefore, when the markup information indicates the corresponding space object, the position of the space object in the virtual three-dimensional space can be accurately determined through the relevant point cloud data. Correspondingly, the point cloud data of the space object, corresponding to the object image, that the markup information points to may be determined as marked point cloud data; that is, the space object is composed of the marked point cloud data, and the marked point cloud data can be used to determine the space object.
In the third step, a second mark point is set on the space object indicated by the marked point cloud data, and the point cloud data corresponding to the second mark point is used as the location information of the space object on the point cloud image frame.
Similar to the first mark point described above, a second mark point may be set on the space object, and the second mark point may be used to mark the position, in the point cloud image frame, of a second set position on the space object. The point cloud data corresponding to the second mark point is used as the location information of the space object on the point cloud image frame (which may also be regarded as its location in the virtual three-dimensional space at the moment the point cloud image frame was acquired), so that the position of the space object can be measured on each point cloud image frame. The number of second mark points may also be one or more (when there are multiple second mark points, the orientation change of the space object can be detected through them), depending on the actual situation.
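One possible way to derive a second mark point from the marked point cloud data is sketched below, taking the centroid of the marked points as the second set position. This is an illustrative choice, not one prescribed by the embodiment, and the coordinates are hypothetical.

```python
# Hypothetical marked point cloud data: the points that the markup information
# assigns to one space object, in the virtual three-dimensional space.
marked_points = [(2.0, 4.0, 1.0), (4.0, 4.0, 1.0), (3.0, 1.0, 1.0)]

def second_mark_point(points):
    """One illustrative second mark point: the centroid of the marked cloud."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# The point corresponding to the second mark point serves as the space
# object's location information on this point cloud image frame.
mark = second_mark_point(marked_points)
print(mark)  # (3.0, 3.0, 1.0)
```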
Step 203: the velocity information of the object corresponding to the object image is calculated from the location information sequence.
The location information sequence can reflect the motion state, in the to-be-processed image frames, of the object corresponding to the object image. In order to determine the motion of the object in the to-be-processed image frames within the collection period, the velocity information of the object within the collection period can also be calculated. The process of obtaining the velocity information of an object is described below separately for pixel image frames and point cloud image frames.
In some optional implementations of the present embodiment, calculating the velocity information of the object corresponding to the object image from the location information sequence may include the following steps:
In the first step, the first collection period of the pixel image frames is determined from the first timestamps of the pixel image frames.
A pixel image frame may carry a first timestamp recording the acquisition time of the pixel image frame. Under normal circumstances, the collection period of the pixel image frames is fixed; therefore, the first collection period of the pixel image frames can be determined from the time difference between the respective first timestamps of two adjacent pixel image frames.
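As a minimal sketch of this step (the timestamps are hypothetical; averaging over all adjacent pairs is one illustrative way to estimate the fixed period):

```python
def first_collection_period(first_timestamps):
    """Estimate the fixed collection period from adjacent first timestamps,
    averaging the pairwise differences to smooth out small jitter."""
    diffs = [t1 - t0 for t0, t1 in zip(first_timestamps, first_timestamps[1:])]
    return sum(diffs) / len(diffs)

stamps = [0.0, 0.5, 1.0, 1.5]  # hypothetical first timestamps (seconds)
print(first_collection_period(stamps))  # 0.5
```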
In the second step, for each object image contained in the pixel image frames, the first speed information of the object corresponding to the object image within the first collection period is determined from two adjacent pieces of location information in the location information sequence of the object image and the first collection period, thereby obtaining the first speed information sequence corresponding to the location information sequence.
Each pixel image frame may contain multiple object images, and the first speed information, within the first collection period, of the object corresponding to any object image can be determined from adjacent pixel image frames. That is, the first speed information is the Euclidean distance between the position values of two adjacent pieces of location information divided by the first collection period. Arranging all the first speed information of the object in order yields the first speed information sequence of the object.
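The computation just described can be sketched as follows; the location entries and the collection period are hypothetical values chosen only for illustration.

```python
import math

def first_speed_sequence(location_sequence, period):
    """First speed information: the Euclidean distance between the position
    values of two adjacent location entries, divided by the collection period."""
    speeds = []
    for (x0, y0, _), (x1, y1, _) in zip(location_sequence, location_sequence[1:]):
        speeds.append(math.hypot(x1 - x0, y1 - y0) / period)
    return speeds

# Hypothetical (x, y, t) location information with a 1-second collection period.
locations = [(0.0, 0.0, 0.0), (3.0, 4.0, 1.0), (6.0, 8.0, 2.0)]
print(first_speed_sequence(locations, period=1.0))  # [5.0, 5.0]
```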
In some optional implementations of the present embodiment, calculating the velocity information of the object corresponding to the object image from the location information sequence may further include: calculating the first speed difference between adjacent pieces of first speed information in the first speed information sequence, and if the first speed difference is greater than a first set speed threshold, taking the two pixel image frames corresponding to the first speed difference as feature image frames.
The speed difference can reflect whether the speed of a moving object has mutated, and in practice the speed usually does not mutate. Therefore, when a speed mutation is detected, it is most likely that the perception algorithm was disturbed for some reason and a labeling error occurred during annotation. For example, when a to-be-processed image frame records multiple simultaneously moving objects, the positions, speeds and mutual distances of the objects may cause the perception algorithm to make labeling errors.
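A minimal sketch of flagging feature image frames through the speed-difference threshold is given below; the speed values, the threshold and the frame-indexing convention are hypothetical illustrations.

```python
def find_feature_frames(speeds, threshold):
    """Flag the frame pairs whose adjacent speed difference exceeds the first
    set speed threshold; such mutations likely indicate labeling errors."""
    feature_pairs = []
    for i in range(len(speeds) - 1):
        if abs(speeds[i + 1] - speeds[i]) > threshold:
            # speeds[i + 1] was computed between frames i + 1 and i + 2,
            # so those two frames are taken as feature image frames.
            feature_pairs.append((i + 1, i + 2))
    return feature_pairs

speeds = [5.0, 5.2, 30.0, 5.1]  # hypothetical first speed information sequence
print(find_feature_frames(speeds, threshold=10.0))  # [(2, 3), (3, 4)]
```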
In some optional implementations of the present embodiment, calculating the velocity information of the object corresponding to the object image from the location information sequence may include the following steps:
In the first step, the second collection period of the point cloud image frames is determined from the second timestamps of the point cloud image frames.
Similar to the pixel image frames, the point cloud image frames may also have a fixed collection period. The second collection period of the point cloud image frames can be determined from the time difference between the respective second timestamps of two adjacent point cloud image frames.
In the second step, for each space object contained in the point cloud image frames, the second speed information of the object corresponding to the space object within the second collection period is determined from two adjacent pieces of location information in the location information sequence of the space object and the second collection period, thereby obtaining the second speed information sequence corresponding to the location information sequence.
The process of obtaining the second speed information sequence is similar to that of obtaining the first speed information sequence and is not repeated here.
In some optional implementations of the present embodiment, the above method may further include a step of optimizing the perception algorithm, and the step of optimizing the perception algorithm may include the following steps:
In the first step, the markup information and scene type information corresponding to each feature image frame are obtained.
The feature image frames are usually image frames that the perception algorithm labeled incorrectly. Such image frames reflect that certain calculation, recognition or judgment parameters of the perception algorithm take wrong values, or that its calculation, recognition or judgment methods are deficient. After these feature image frames are obtained, markup information and scene type information can be added to them manually, so that the perception algorithm learns the process of manually labeling these feature image frames and can finally accurately label these image frames that are prone to labeling errors. The scene type information may include an occlusion scene type, a false-detection scene type and a missed-detection scene type; depending on different judgment criteria or data types, the scene type information may also be a weather scene type, a traffic-congestion scene type and the like, which are not enumerated one by one here.
In the second step, a machine learning method is used to take the feature image frames as the input of the perception algorithm and the scene type information and markup information corresponding to the feature image frames as the output of the perception algorithm, and the optimized perception algorithm is obtained by training.
The electronic device of the present embodiment may use a machine learning method to take the feature image frames as the input of the perception algorithm, take the scene type information and markup information corresponding to the feature image frames as the output of the perception algorithm, and obtain the optimized perception algorithm by training. Specifically, the electronic device of the present embodiment may use an intelligent algorithm such as a deep learning algorithm or a recurrent neural network to take a feature image frame as the input of the perception algorithm and record the initial markup information produced by the perception algorithm for the feature image frame; then compare the initial markup information with the manual markup information to obtain markup difference information; and finally adjust the internal parameters of the perception algorithm through the markup difference information, so that the initial markup information the perception algorithm produces for the feature image frame reaches the accuracy of the manual markup information, thereby obtaining an optimized perception algorithm that can accurately label the feature image frames.
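For illustration, a toy sketch of the comparison step, the perception algorithm's initial markup on the feature image frames versus the manual markup, is shown below. The frame names, the positions and the per-coordinate difference metric are all hypothetical; the embodiment does not fix how the markup difference information is represented.

```python
# Hypothetical initial markup (from the perception algorithm) and manual markup
# for two feature image frames, each markup being an (x, y) position value.
initial_markup = {"frame_7": (10.0, 4.0), "frame_8": (55.0, 4.0)}
manual_markup = {"frame_7": (10.0, 4.0), "frame_8": (15.0, 4.0)}

# The markup difference information then drives the parameter adjustment
# performed while training the optimized perception algorithm.
markup_difference = {
    name: (manual_markup[name][0] - pos[0], manual_markup[name][1] - pos[1])
    for name, pos in initial_markup.items()
}
print(markup_difference)  # {'frame_7': (0.0, 0.0), 'frame_8': (-40.0, 0.0)}
```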
Above, data processing is performed, through the location information and the velocity information, on the to-be-processed image frame sequence output by the perception algorithm, and the labeling accuracy of the perception algorithm is judged from the obtained feature image frames; afterwards, the perception algorithm is optimized through the feature image frames, so that the optimized perception algorithm can accurately label the feature image frames. For different technical problems, the perception algorithm may also differ. For a different perception algorithm, data processing can be performed on its output through other parameters similar to the location information and velocity information of the present embodiment to obtain corresponding feature data, the labeling accuracy of the perception algorithm can be judged through the feature data, and the perception algorithm can then be optimized through the feature data. That is, the evaluation and optimization process of the present embodiment is general with respect to perception algorithms: for a different perception algorithm, it is only necessary to obtain the corresponding data, and that perception algorithm can likewise be evaluated and optimized.
With continued reference to the signal that Fig. 3, Fig. 3 are according to the application scenarios of the method for obtaining information of the present embodiment Figure.In the application scenarios of Fig. 3, point cloud data acquires equipment 102 (i.e. terminal device 102) setting in vehicle roof, puts cloud number The point cloud data image sequence of the object detected in vehicle travel process is acquired according to acquisition equipment 102;The point cloud data of acquisition Server 104 is sent to by network 103 after perceived algorithm mark;Server 104 is to be processed after perception algorithm mark Image frame sequence is handled, and location information and velocity information are obtained;Later, perception algorithm is optimized, so that perception is calculated Method is improved to the accuracy for corresponding to object mark in point cloud data image sequence.
In the method provided by the above embodiment of the present application, to-be-processed image frames are extracted from the to-be-processed image frame sequence; then the location information of each object image in each to-be-processed image frame is determined, and the location information sequence of the object image over the to-be-processed image frame sequence is obtained; finally, the velocity information of the object corresponding to the object image is calculated from the location information sequence. The perception algorithm can be evaluated and optimized through the obtained location information and velocity information, improving the labeling accuracy of the perception algorithm.
With further reference to Fig. 4, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a device for obtaining information. The device embodiment corresponds to the method embodiment shown in Fig. 2, and the device can be specifically applied to various electronic devices.
As shown in Fig. 4, the device 400 for obtaining information of the present embodiment may include: a data acquisition unit 401, a location information acquisition unit 402 and a velocity information acquisition unit 403. The data acquisition unit 401 is configured to obtain a to-be-processed image frame sequence, where the to-be-processed image frame sequence includes multiple to-be-processed image frames, a to-be-processed image frame includes markup information with which the object images in the to-be-processed image frame are labeled by the perception algorithm, and the markup information is used to indicate the positions of the object images. The location information acquisition unit 402 is configured to extract to-be-processed image frames from the to-be-processed image frame sequence and, for each object image contained in a to-be-processed image frame, determine the location information of the object image on the to-be-processed image frame through the markup information contained in the to-be-processed image frame, thereby obtaining the location information sequence of the object image corresponding to the to-be-processed image frame sequence. The velocity information acquisition unit 403 is configured to calculate the velocity information of the object corresponding to the object image from the location information sequence.
In some optional implementations of the present embodiment, the to-be-processed image frame includes a pixel image frame and a first timestamp corresponding to the pixel image frame, and the location information acquisition unit 402 may include: a pixel image frame setting subunit (not shown), a first mark point setting subunit (not shown) and a first location information acquisition subunit (not shown). The pixel image frame setting subunit is configured to establish a plane rectangular coordinate system and place the pixel image frame in a set region of the plane rectangular coordinate system, where the abscissa and ordinate of the plane rectangular coordinate system represent distance. The first mark point setting subunit is configured to set, for each object image contained in the pixel image frame, a first mark point on the object image, where the first mark point is used to mark the position, in the pixel image frame, of the first set position on the object corresponding to the object image. The first location information acquisition subunit is configured to use the coordinate value of the first mark point on the plane rectangular coordinate system as the position value of the object image on the pixel image frame, use the time corresponding to the first timestamp of the pixel image frame as the time value corresponding to the position value, and combine the position value and the time value into the location information of the object image on the to-be-processed image frame.
In some optional implementations of the present embodiment, the velocity information acquisition unit 403 may include: a first collection period acquisition subunit (not shown) and a first speed information sequence acquisition subunit (not shown). The first collection period acquisition subunit is configured to determine the first collection period of the pixel image frames from the first timestamps of the pixel image frames. The first speed information sequence acquisition subunit is configured to determine, for each object image contained in the pixel image frames, the first speed information of the object corresponding to the object image within the first collection period from two adjacent pieces of location information in the location information sequence of the object image and the first collection period, thereby obtaining the first speed information sequence corresponding to the location information sequence.
In some optional implementations of the present embodiment, the velocity information acquisition unit 403 may further be configured to: calculate the first speed difference between adjacent pieces of first speed information in the first speed information sequence, and if the first speed difference is greater than a first set speed threshold, take the two pixel image frames corresponding to the first speed difference as feature image frames.
In some optional implementations of the present embodiment, the to-be-processed image frame may include a point cloud image frame and a second timestamp corresponding to the point cloud image frame, the point cloud image frame includes point cloud data, and the point cloud data is used to describe a space object through three-dimensional coordinate points. Accordingly, the location information acquisition unit 402 may include: a virtual three-dimensional space construction subunit (not shown), a marked point cloud data determination subunit (not shown) and a second location information acquisition subunit (not shown). The virtual three-dimensional space construction subunit is configured to construct a virtual three-dimensional space from the point cloud data contained in the point cloud image frame. The marked point cloud data determination subunit is configured to determine, as marked point cloud data, the point cloud data of the space object, corresponding to the object image, that the markup information points to. The second location information acquisition subunit is configured to set a second mark point on the space object indicated by the marked point cloud data and use the point cloud data corresponding to the second mark point as the location information of the space object on the point cloud image frame, where the second mark point is used to mark the position, in the point cloud image frame, of the second set position on the space object.
In some optional implementations of the present embodiment, the velocity information acquisition unit 403 may include: a second collection period acquisition subunit (not shown) and a second speed information sequence acquisition subunit (not shown). The second collection period acquisition subunit is configured to determine the second collection period of the point cloud image frames from the second timestamps of the point cloud image frames. The second speed information sequence acquisition subunit is configured to determine, for each space object contained in the point cloud image frames, the second speed information of the object corresponding to the space object within the second collection period from two adjacent pieces of location information in the location information sequence of the space object and the second collection period, thereby obtaining the second speed information sequence corresponding to the location information sequence.
In some optional implementations of the present embodiment, the velocity information acquisition unit 403 may further be configured to: calculate the second speed difference between adjacent pieces of second speed information in the second speed information sequence, and if the second speed difference is greater than a second set speed threshold, take the two point cloud image frames corresponding to the second speed difference as feature image frames.
In some optional implementations of the present embodiment, the device 400 for obtaining information may further include an optimization unit (not shown) configured to optimize the perception algorithm, and the optimization unit may include: a target information acquisition subunit (not shown) and an optimization subunit (not shown). The target information acquisition subunit is configured to obtain the markup information and scene type information corresponding to each feature image frame, where the scene type information includes an occlusion scene type, a false-detection scene type and a missed-detection scene type. The optimization subunit is configured to use a machine learning method to take the feature image frames as the input of the perception algorithm and the scene type information and markup information corresponding to the feature image frames as the output of the perception algorithm, and obtain the optimized perception algorithm by training.
The present embodiment further provides a server, including: one or more processors; and a memory for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to execute the above method for obtaining information.
The present embodiment further provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the above method for obtaining information.
Referring now to Fig. 5, it shows a schematic structural diagram of a computer system 500 of a server suitable for implementing the embodiments of the present application. The server shown in Fig. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. Various programs and data required for the operation of the system 500 are also stored in the RAM 503. The CPU 501, the ROM 502 and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse and the like; an output portion 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A driver 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 510 as needed, so that a computer program read therefrom is installed into the storage portion 508 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above functions defined in the methods of the present application are executed.
It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, the computer-readable storage medium may be any tangible medium containing or storing a program, which can be used by or in combination with an instruction execution system, apparatus or device. In the present application, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and this computer-readable medium can send, propagate or transmit a program used by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: wireless, electric wire, optical cable, RF and the like, or any appropriate combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for realizing a specified logic function. It should also be noted that in some implementations serving as replacements, the functions marked in the boxes may also occur in an order different from that marked in the drawings. For example, two successively shown boxes may actually be executed basically in parallel, and they may also sometimes be executed in the opposite order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be realized by a dedicated hardware-based system executing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be realized by means of software or by means of hardware. The described units may also be arranged in a processor; for example, a processor may be described as including a data acquisition unit, a location information acquisition unit and a velocity information acquisition unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the velocity information acquisition unit may also be described as "a unit for obtaining velocity information".
As another aspect, the present application further provides a computer-readable medium, which may be contained in the device described in the above embodiments, or may exist separately without being assembled into the device. The computer-readable medium carries one or more programs, and the one or more programs, when executed by the device, cause the device to: obtain a to-be-processed image frame sequence, where the to-be-processed image frame sequence includes multiple to-be-processed image frames, a to-be-processed image frame includes markup information with which the object images in the to-be-processed image frame are labeled by a perception algorithm, and the markup information is used to indicate the positions of the object images; extract to-be-processed image frames from the to-be-processed image frame sequence and, for each object image contained in a to-be-processed image frame, determine the location information of the object image on the to-be-processed image frame through the markup information contained in the to-be-processed image frame, thereby obtaining the location information sequence of the object image corresponding to the to-be-processed image frame sequence; and calculate the velocity information of the object corresponding to the object image from the location information sequence.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features; it also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, solutions formed by replacing the above features with (but not limited to) technical features disclosed in the present application that have similar functions.

Claims (18)

1. A method for obtaining information, characterized in that the method comprises:
acquiring an image frame sequence to be processed, the image frame sequence comprising a plurality of image frames to be processed, each image frame to be processed comprising annotation information produced by a perception algorithm annotating the object images in the frame, the annotation information being used to indicate the positions of the object images;
extracting image frames to be processed from the image frame sequence and, for each object image contained in an image frame to be processed, determining position information of the object image on that frame from the annotation information contained in the frame, thereby obtaining a position information sequence of the object image corresponding to the image frame sequence;
calculating velocity information of an object corresponding to the object image from the position information sequence.
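The pipeline of claim 1 can be sketched as follows. This is an illustrative sketch, not taken from the patent itself: the frame layout, the object identifiers, and both function names are assumptions; the annotation information is reduced to a mapping from a hypothetical object id to a position in metres.

```python
def build_position_sequences(frames):
    """frames: iterable of (timestamp, annotations), where annotations maps
    object_id -> (x, y), the position indicated by the annotation information.
    Returns one (timestamp, position) sequence per object."""
    sequences = {}
    for t, annotations in frames:
        for obj_id, pos in annotations.items():
            sequences.setdefault(obj_id, []).append((t, pos))
    return sequences


def speeds(sequence):
    """Average speed between consecutive annotated positions of one object."""
    result = []
    for (t0, (x0, y0)), (t1, (x1, y1)) in zip(sequence, sequence[1:]):
        distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        result.append(distance / (t1 - t0))
    return result
```

For a car annotated at (0, 0), (3, 4) and (6, 8) metres at t = 0 s, 1 s and 2 s, `speeds` yields 5.0 m/s for each interval.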
2. The method according to claim 1, characterized in that each image frame to be processed comprises a pixel image frame and a first timestamp corresponding to the pixel image frame, and
determining the position information of the object image on the image frame to be processed from the annotation information contained in the frame comprises:
establishing a planar rectangular coordinate system and placing the pixel image frame in a set region of the coordinate system, the abscissa and ordinate of the coordinate system representing distance;
for each object image contained in the pixel image frame, setting a first mark point on the object image, the first mark point marking the position in the pixel image frame of a first set position on the object corresponding to the object image;
taking the coordinate value of the first mark point in the planar rectangular coordinate system as the position value of the object image on the pixel image frame, taking the time indicated by the first timestamp of the pixel image frame as the time value corresponding to the position value, and combining the position value and the time value into the position information of the object image on the image frame to be processed.
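Claim 2's position information pairs a coordinate value with a timestamp. A minimal sketch, assuming the pixel frame is placed in the plane coordinate system by an origin offset and a metres-per-pixel scale — both parameters are our assumptions, not specified by the claim:

```python
def mark_point_position(pixel_xy, origin_m=(0.0, 0.0), metres_per_pixel=0.05):
    """Map the first mark point's pixel coordinates into the planar rectangular
    coordinate system whose axes represent distance (assumed linear mapping)."""
    px, py = pixel_xy
    ox, oy = origin_m
    return (ox + px * metres_per_pixel, oy + py * metres_per_pixel)


def position_info(pixel_xy, first_timestamp):
    """Combine the position value with the frame's first timestamp, as in
    the final step of claim 2."""
    return {"position": mark_point_position(pixel_xy), "time": first_timestamp}
```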
3. The method according to claim 2, characterized in that calculating the velocity information of the object corresponding to the object image from the position information sequence comprises:
determining a first acquisition period of the pixel image frames from the first timestamps between pixel image frames;
for each object image contained in a pixel image frame, determining, from two adjacent entries in the position information sequence of the object image and the first acquisition period, first velocity information of the corresponding object within the first acquisition period, thereby obtaining a first velocity information sequence corresponding to the position information sequence.
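The two steps of claim 3 can be sketched like this. The claim does not fix how the acquisition period is derived from the timestamps; using the mean gap is our assumption, as are all names:

```python
def first_acquisition_period(timestamps):
    """Estimate the acquisition period as the mean gap between consecutive
    first timestamps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)


def first_speed_sequence(positions, period):
    """One speed value per pair of adjacent position entries, computed over
    one acquisition period."""
    return [((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / period
            for (x0, y0), (x1, y1) in zip(positions, positions[1:])]
```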
4. The method according to claim 3, characterized in that calculating the velocity information of the object corresponding to the object image from the position information sequence further comprises:
calculating a first velocity difference between adjacent first velocity information entries in the first velocity information sequence and, if the first velocity difference is greater than a first set velocity threshold, taking the two pixel image frames corresponding to the first velocity difference as characteristic image frames.
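The thresholding of claim 4 can be sketched as follows. A velocity jump larger than the set threshold is implausible for a physical object, so the frames behind it are flagged for review; exactly which frames the claim designates as "the two corresponding frames" is an assumption here, as is the function name:

```python
def characteristic_frame_pairs(speed_sequence, threshold):
    """Return index pairs (i, i + 1) of adjacent speed entries whose
    difference exceeds the set velocity threshold; the frames behind those
    entries are kept as characteristic image frames."""
    return [(i, i + 1)
            for i, (v0, v1) in enumerate(zip(speed_sequence, speed_sequence[1:]))
            if abs(v1 - v0) > threshold]
```

A sequence [10.0, 10.5, 25.0, 24.5] m/s with a 5 m/s threshold flags only the jump between entries 1 and 2.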
5. The method according to claim 4, characterized in that each image frame to be processed comprises a point cloud image frame and a second timestamp corresponding to the point cloud image frame, the point cloud image frame comprising point cloud data that describes spatial objects by three-dimensional coordinate points, and
determining the position information of the object image on the image frame to be processed from the annotation information contained in the frame comprises:
constructing a virtual three-dimensional space from the point cloud data contained in the point cloud image frame;
determining the point cloud data, pointed to by the annotation information, of the spatial object corresponding to the object image as labeled point cloud data;
setting a second mark point on the spatial object indicated by the labeled point cloud data, and taking the point cloud data corresponding to the second mark point as the position information of the spatial object on the point cloud image frame, the second mark point marking the position in the point cloud image frame of a second set position on the spatial object.
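A sketch of the point cloud branch of claim 5. The annotation singles out the points of one spatial object (the "labeled point cloud data"); as the second mark point we use the centroid of those 3-D points — the claim only requires *some* set position on the object, so the centroid choice, the mask representation, and all names are assumptions:

```python
def labeled_point_cloud(cloud, label_mask):
    """Select the 3-D points the annotation information points to."""
    return [p for p, keep in zip(cloud, label_mask) if keep]


def second_mark_point(points):
    """Centroid of the labeled points, used here as the second set position
    on the spatial object."""
    n = len(points)
    return tuple(sum(p[axis] for p in points) / n for axis in range(3))
```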
6. The method according to claim 5, characterized in that calculating the velocity information of the object corresponding to the object image from the position information sequence comprises:
determining a second acquisition period of the point cloud image frames from the second timestamps between point cloud image frames;
for each spatial object contained in a point cloud image frame, determining, from two adjacent entries in the position information sequence of the spatial object and the second acquisition period, second velocity information of the corresponding object within the second acquisition period, thereby obtaining a second velocity information sequence corresponding to the position information sequence.
7. The method according to claim 6, characterized in that calculating the velocity information of the object corresponding to the object image from the position information sequence further comprises:
calculating a second velocity difference between adjacent second velocity information entries in the second velocity information sequence and, if the second velocity difference is greater than a second set velocity threshold, taking the two point cloud image frames corresponding to the second velocity difference as characteristic image frames.
8. The method according to claim 7, characterized in that the method further comprises a step of optimizing the perception algorithm, the step of optimizing the perception algorithm comprising:
obtaining annotation information and scene type information corresponding to each characteristic image frame, the scene type information comprising an occlusion scene type, a false-detection scene type and a missed-detection scene type;
using a machine learning method, taking the characteristic image frames as input of the perception algorithm and the scene type information and annotation information corresponding to the characteristic image frames as output of the perception algorithm, and training to obtain an optimized perception algorithm.
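The data-assembly half of claim 8 can be sketched like this (the actual model training is out of scope). The record layout, the scene-type strings, and the function name are all assumptions; the point is that each characteristic frame becomes a training input whose target bundles its scene type and annotation:

```python
# Assumed string labels for the three scene types named in claim 8.
SCENE_TYPES = {"occlusion", "false_detection", "missed_detection"}


def training_samples(characteristic_frame_ids, scene_types, annotations):
    """Pair each characteristic frame (model input) with its scene type and
    annotation (model target) for retraining the perception algorithm."""
    samples = []
    for frame_id in characteristic_frame_ids:
        scene = scene_types[frame_id]
        if scene not in SCENE_TYPES:
            raise ValueError("unknown scene type: %r" % scene)
        samples.append({"input": frame_id,
                        "target": {"scene_type": scene,
                                   "annotation": annotations[frame_id]}})
    return samples
```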
9. A device for obtaining information, characterized in that the device comprises:
a data acquisition unit, configured to acquire an image frame sequence to be processed, the image frame sequence comprising a plurality of image frames to be processed, each image frame to be processed comprising annotation information produced by a perception algorithm annotating the object images in the frame, the annotation information being used to indicate the positions of the object images;
a position information acquisition unit, configured to extract image frames to be processed from the image frame sequence and, for each object image contained in an image frame to be processed, determine position information of the object image on that frame from the annotation information contained in the frame, thereby obtaining a position information sequence of the object image corresponding to the image frame sequence;
a velocity information acquisition unit, configured to calculate velocity information of an object corresponding to the object image from the position information sequence.
10. The device according to claim 9, characterized in that each image frame to be processed comprises a pixel image frame and a first timestamp corresponding to the pixel image frame, and
the position information acquisition unit comprises:
a pixel image frame setting subunit, configured to establish a planar rectangular coordinate system and place the pixel image frame in a set region of the coordinate system, the abscissa and ordinate of the coordinate system representing distance;
a first mark point setting subunit, configured to set, for each object image contained in the pixel image frame, a first mark point on the object image, the first mark point marking the position in the pixel image frame of a first set position on the object corresponding to the object image;
a first position information acquisition subunit, configured to take the coordinate value of the first mark point in the planar rectangular coordinate system as the position value of the object image on the pixel image frame, take the time indicated by the first timestamp of the pixel image frame as the time value corresponding to the position value, and combine the position value and the time value into the position information of the object image on the image frame to be processed.
11. The device according to claim 10, characterized in that the velocity information acquisition unit comprises:
a first acquisition period obtaining subunit, configured to determine a first acquisition period of the pixel image frames from the first timestamps between pixel image frames;
a first velocity information sequence obtaining subunit, configured to determine, for each object image contained in a pixel image frame, from two adjacent entries in the position information sequence of the object image and the first acquisition period, first velocity information of the corresponding object within the first acquisition period, thereby obtaining a first velocity information sequence corresponding to the position information sequence.
12. The device according to claim 11, characterized in that the velocity information acquisition unit is further configured to:
calculate a first velocity difference between adjacent first velocity information entries in the first velocity information sequence and, if the first velocity difference is greater than a first set velocity threshold, take the two pixel image frames corresponding to the first velocity difference as characteristic image frames.
13. The device according to claim 12, characterized in that each image frame to be processed comprises a point cloud image frame and a second timestamp corresponding to the point cloud image frame, the point cloud image frame comprising point cloud data that describes spatial objects by three-dimensional coordinate points, and
the position information acquisition unit comprises:
a virtual three-dimensional space construction subunit, configured to construct a virtual three-dimensional space from the point cloud data contained in the point cloud image frame;
a labeled point cloud data determination subunit, configured to determine the point cloud data, pointed to by the annotation information, of the spatial object corresponding to the object image as labeled point cloud data;
a second position information acquisition subunit, configured to set a second mark point on the spatial object indicated by the labeled point cloud data and take the point cloud data corresponding to the second mark point as the position information of the spatial object on the point cloud image frame, the second mark point marking the position in the point cloud image frame of a second set position on the spatial object.
14. The device according to claim 13, characterized in that the velocity information acquisition unit comprises:
a second acquisition period obtaining subunit, configured to determine a second acquisition period of the point cloud image frames from the second timestamps between point cloud image frames;
a second velocity information sequence obtaining subunit, configured to determine, for each spatial object contained in a point cloud image frame, from two adjacent entries in the position information sequence of the spatial object and the second acquisition period, second velocity information of the corresponding object within the second acquisition period, thereby obtaining a second velocity information sequence corresponding to the position information sequence.
15. The device according to claim 14, characterized in that the velocity information acquisition unit is further configured to:
calculate a second velocity difference between adjacent second velocity information entries in the second velocity information sequence and, if the second velocity difference is greater than a second set velocity threshold, take the two point cloud image frames corresponding to the second velocity difference as characteristic image frames.
16. The device according to claim 15, characterized in that the device further comprises an optimization unit for optimizing the perception algorithm, the optimization unit comprising:
a target information obtaining subunit, configured to obtain annotation information and scene type information corresponding to each characteristic image frame, the scene type information comprising an occlusion scene type, a false-detection scene type and a missed-detection scene type;
an optimization subunit, configured to use a machine learning method to take the characteristic image frames as input of the perception algorithm, take the scene type information and annotation information corresponding to the characteristic image frames as output of the perception algorithm, and train to obtain an optimized perception algorithm.
17. A server, comprising:
one or more processors; and
a memory, for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to perform the method of any one of claims 1 to 8.
18. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method of any one of claims 1 to 8.
CN201711297504.4A 2017-12-08 2017-12-08 Method and device for acquiring information Active CN109903308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711297504.4A CN109903308B (en) 2017-12-08 2017-12-08 Method and device for acquiring information


Publications (2)

Publication Number Publication Date
CN109903308A true CN109903308A (en) 2019-06-18
CN109903308B CN109903308B (en) 2021-02-26

Family

ID=66940649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711297504.4A Active CN109903308B (en) 2017-12-08 2017-12-08 Method and device for acquiring information

Country Status (1)

Country Link
CN (1) CN109903308B (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6628804B1 * 1999-02-19 2003-09-30 Fujitsu Limited Method and apparatus for measuring speed of vehicle
CN101742305A * 2009-12-09 2010-06-16 哈尔滨商业大学 Motion estimation method for a scalable video coding and decoding system based on a Markov chain model
CN103473757A * 2012-06-08 2013-12-25 株式会社理光 Object tracking method and system in disparity maps
CN104301585A * 2014-09-24 2015-01-21 南京邮电大学 Method for real-time detection of specific kinds of objects in a moving scene
CN104599290A * 2015-01-19 2015-05-06 苏州经贸职业技术学院 Target detection method for video sensing nodes
CN105678322A * 2015-12-31 2016-06-15 百度在线网络技术(北京)有限公司 Sample labeling method and apparatus
CN105809654A * 2014-12-29 2016-07-27 深圳超多维光电子有限公司 Target object tracking method and device, and stereoscopic display equipment and method
CN105975929A * 2016-05-04 2016-09-28 北京大学深圳研究生院 Fast pedestrian detection method based on aggregated channel features
CN105989593A * 2015-02-12 2016-10-05 杭州海康威视系统技术有限公司 Method and device for measuring the speed of a specific vehicle in a video recording
CN106127802A * 2016-06-16 2016-11-16 南京邮电大学盐城大数据研究院有限公司 A moving target trajectory tracking method
CN106707293A * 2016-12-01 2017-05-24 百度在线网络技术(北京)有限公司 Obstacle recognition method and device for vehicles
CN106780601A * 2016-12-01 2017-05-31 北京未动科技有限公司 A spatial position tracking method, device and smart device
CN106803262A * 2016-12-21 2017-06-06 上海交通大学 Method for independently resolving vehicle speed using binocular vision
CN106846374A * 2016-12-21 2017-06-13 大连海事大学 Trajectory calculation method for vehicles in a multi-camera scene
CN107430821A * 2015-04-02 2017-12-01 株式会社电装 Image processing apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邱泽宇, 方全, 桑基韬, 徐常胜: "Image annotation based on regional context awareness" (基于区域上下文感知的图像标注), 《计算机学报》 (Chinese Journal of Computers) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668363A (en) * 2019-10-15 2021-04-16 北京地平线机器人技术研发有限公司 Alarm accuracy determination method, device and computer readable storage medium
CN112131414A (en) * 2020-09-23 2020-12-25 北京百度网讯科技有限公司 Signal lamp image labeling method and device, electronic equipment and road side equipment
CN113379591A (en) * 2021-06-21 2021-09-10 中国科学技术大学 Speed determination method, speed determination device, electronic device, and storage medium
CN113379591B (en) * 2021-06-21 2024-02-27 中国科学技术大学 Speed determination method, speed determination device, electronic device and storage medium

Also Published As

Publication number Publication date
CN109903308B (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN108154196B Method and apparatus for outputting images
CN108694882A Method, apparatus and device for annotating maps
US20180276241A1 System and method for telecom inventory management
CN110427917A Method and apparatus for detecting key points
CN109508681A Method and apparatus for generating a human body key point detection model
CN110400363A Map construction method and device based on laser point clouds
CN109753928A Method and device for recognizing illegal buildings
CN108198044A Method, device, medium and electronic equipment for displaying commodity information
WO2019240452A1 Method and system for automatically collecting and updating information related to point of interest in real space
CN109308490A Method and apparatus for generating information
CN110443824A Method and apparatus for generating information
CN109409364A Image annotation method and device
CN112150072A Asset checking method and device based on an intelligent robot, electronic equipment and medium
CN110119725A Method and device for detecting signal lights
CN108509921A Method and apparatus for generating information
CN110263748A Method and apparatus for sending information
CN108132054A Method and apparatus for generating information
US20210264198A1 Positioning method and apparatus
CN109961501A Method and apparatus for establishing a three-dimensional model
CN108133197A Method and apparatus for generating information
CN109635870A Data processing method and device
CN109903308A Method and device for obtaining information
CN108512888B Information annotation method, cloud server, system and electronic equipment
JPWO2014027500A1 Feature extraction method, program, and system
CN109345567A Object movement trajectory recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant