CN108108443A - Character marking method of street view video, terminal equipment and storage medium - Google Patents
- Publication number
- CN108108443A (application number CN201711399597.1A / CN201711399597A)
- Authority
- CN
- China
- Prior art keywords
- streetscape
- video
- frame
- border
- mark point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Library & Information Science (AREA)
- Remote Sensing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a text annotation method for street view video, a terminal device, and a storage medium. The method comprises: in a street view video, acquiring the current frame according to a click command and adding an annotation point in the current frame at the click position; generating an indication boundary for the street view object according to the region in which the annotation point is located; receiving text parameters associated with the annotation point; and reading the associated frames of the current frame and, when the street view object is present in an associated frame, matching the annotation point and text parameters to it. Because the street view video is output frame by frame, it suffers neither stretching deformation nor uneven transitions between images, improving the display effect. In addition, annotation points and their corresponding text parameters are added dynamically to the street view object in the current frame and matched to the same object in the associated frames, improving both the speed of annotation and the coverage of street view objects.
Description
Technical field
The present invention relates to the field of electronic map technology, and more particularly to a text annotation method for street view video, a terminal device, and a storage medium.
Background technology
Streetscape map is a kind of live-action map service, and mainly provide city, street or other environment to the user 360 degree are complete
Scape image, user can obtain map view as if on the spot in person by the service and experience.By streetscape map, as long as being sitting in computer
Before can really see high definition scene on street.The map view experience at " people visual angle " is realized, is provided to the user more
Add true and accurate, the Map Services of richer picture detail.
However, when showing streetscape map by panoramic picture, do not only exist between the easy stretcher strain of image quality, panoramic picture
The problems such as link has some setbacks, and there are problems that not being labeled streetscape content in panoramic picture.
Summary of the invention
The object of the present invention is to provide a text annotation method for street view video, a terminal device, and a storage medium which, by using video rather than panoramic images, solve the technical problems of poor display effect and difficult annotation in existing street view maps.
To solve the above problems, the present invention provides a text annotation method for street view video, comprising:
in a street view video, acquiring the current frame according to a click command, and adding an annotation point in the current frame at the click position;
generating an indication boundary of the street view object according to the region in which the annotation point is located;
receiving text parameters associated with the annotation point;
in the street view video, reading the associated frames of the current frame and, when the street view object is present in an associated frame, matching the annotation point and the text parameters to it.
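The four steps above can be sketched as a minimal in-memory annotation pipeline. The class and field names below are illustrative assumptions for exposition only; the patent does not prescribe any concrete data model:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One annotation point with its boundary and text parameters."""
    frame_no: int    # frame in which the point was added
    position: tuple  # (x, y) click position
    boundary: tuple  # indication boundary, e.g. (x, y, w, h)
    text: dict = field(default_factory=dict)  # name, lat/lon, phone, ...

class StreetViewAnnotator:
    def __init__(self):
        self.annotations = []

    def add_point(self, frame_no, click_pos):
        """Step 1: add an annotation point at the click position."""
        ann = Annotation(frame_no, click_pos, boundary=None)
        self.annotations.append(ann)
        return ann

    def set_boundary(self, ann, boundary):
        """Step 2: generate/confirm the indication boundary."""
        ann.boundary = boundary

    def set_text(self, ann, **params):
        """Step 3: associate text parameters with the point."""
        ann.text.update(params)

    def propagate(self, ann, frames_with_object):
        """Step 4: match the point and text into each associated frame."""
        return {f: ann for f in frames_with_object}

annot = StreetViewAnnotator()
a = annot.add_point(frame_no=10, click_pos=(320, 200))
annot.set_boundary(a, (300, 150, 80, 120))
annot.set_text(a, name="Pagoda", phone="555-0100")
matched = annot.propagate(a, [11, 12, 13])
```

Propagation here simply shares one annotation object across the associated frames, which mirrors the patent's point that one annotation operation covers every frame containing the same object.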
As a further improvement of the present invention, generating the indication boundary of the street view object according to the region in which the annotation point is located comprises:
generating a preliminary boundary from the pixel region in which the annotation point lies in the current frame;
upon receiving an adjustment command or a confirmation command for the preliminary boundary, generating the indication boundary and its corresponding boundary parameters.
As a further improvement of the present invention, generating the preliminary boundary comprises:
detecting whether the street view video displays the street view object from a single angle or from multiple angles;
if from a single angle, generating a two-dimensional preliminary boundary;
if from multiple angles, generating a three-dimensional preliminary boundary.
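The single-angle/multi-angle branch can be sketched as follows. The fixed half-extent of 40 pixels is an illustrative default; the patent leaves the sizing of the preliminary boundary open:

```python
def make_preliminary_boundary(click, multi_angle):
    """Return a 2-D rectangle or a 3-D box around the click position."""
    x, y = click
    r = 40  # illustrative half-extent around the annotation point
    if not multi_angle:
        # single-angle display: 2-D rectangle (x, y, w, h)
        return ("2d", (x - r, y - r, 2 * r, 2 * r))
    # multi-angle display: 3-D axis-aligned box (x, y, z, w, h, d)
    return ("3d", (x - r, y - r, 0, 2 * r, 2 * r, 2 * r))

kind, box = make_preliminary_boundary((320, 200), multi_angle=False)
```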
As a further improvement of the present invention, receiving the text parameters associated with the annotation point comprises:
receiving text parameters that are entered or imported, the text parameters including the latitude and longitude, name, telephone number, and/or product introduction of the street view object;
associating the text parameters with the corresponding annotation point.
As a further improvement of the present invention, the method further comprises:
storing the annotation point and its associated boundary parameters and text parameters;
storing the frame number of the current frame and the frame numbers of the associated frames in which the street view object is present, and associating those frame numbers with the annotation point.
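The frame-number association can be sketched as a simple index from frame number to annotation points. The dict-based layout is an assumption; the patent only requires that frame numbers be stored and associated with the annotation point:

```python
class AnnotationStore:
    """Frame-number index over stored annotation points."""

    def __init__(self):
        self.points = {}       # point_id -> (boundary parameters, text parameters)
        self.frame_index = {}  # frame_no -> set of point_ids

    def store(self, point_id, boundary, text, frame_nos):
        """Store the point once and associate every frame number with it."""
        self.points[point_id] = (boundary, text)
        for f in frame_nos:
            self.frame_index.setdefault(f, set()).add(point_id)

    def points_in_frame(self, frame_no):
        """Fast lookup when a frame is about to be displayed."""
        return [self.points[p] for p in self.frame_index.get(frame_no, ())]

store = AnnotationStore()
store.store("p1", (300, 150, 80, 120), {"name": "Pagoda"}, [10, 11, 12])
```

Indexing by frame number is what lets the display path fetch only the annotations of the frame being shown, rather than embedding every annotation in every frame.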
As a further improvement of the present invention, the method further comprises:
when playing the street view video, displaying the annotation point if the frame being played contains it;
when a gesture or touch command sweeps across the indication boundary corresponding to the annotation point, displaying the text parameters corresponding to the annotation point.
As a further improvement of the present invention, the method further comprises:
receiving a search term entered by a user;
judging whether a text parameter exactly matching the search term exists;
if it exists, reading at least one street view video result from the street view database according to the text parameter, or reading at least one street view video segment from the street view video;
if it does not exist, fuzzily processing the search term until a street view video result or street view video segment is read.
As a further improvement of the present invention, after reading at least one street view video result or reading at least one street view video segment from the street view video, the method further comprises:
when multiple street view video results or multiple street view video segments are obtained, displaying the prime frame and video length of each result or segment.
To solve the above problems, the present invention also provides a terminal device comprising a processor, a memory, and a display, the processor being coupled to the memory and the display, the memory storing a computer program executable on the processor; when the processor executes the computer program, the text annotation method for street view video described above is realized.
To solve the above problems, the present invention also provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, the text annotation method for street view video described above is realized.
By outputting the street view video frame by frame, the present invention avoids both stretching deformation and uneven transitions between images, improving the display effect. Further, the invention dynamically adds an annotation point, and text parameters associated with it, to the street view object in the current frame, and matches the annotation point and text parameters to the corresponding street view object in the associated frames, improving both the speed of annotation and the coverage of street view objects. This enables the user to subsequently carry out value-added applications such as querying, searching, navigation, and advertising based on the text parameters.
Description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the text annotation method for street view video of the present invention;
Fig. 2 is a schematic view of the terminal interface of an embodiment of the terminal device of the present invention;
Fig. 3 is a schematic flowchart of an embodiment of the indication-boundary generation process in the text annotation method for street view video of the present invention;
Fig. 4 is a schematic flowchart of an embodiment of the preliminary-boundary generation process in the text annotation method for street view video of the present invention;
Fig. 5 is a schematic view of the terminal interface of a second embodiment of the terminal device of the present invention;
Fig. 6 is a schematic flowchart of an embodiment of the text-parameter receiving process in the text annotation method for street view video of the present invention;
Fig. 7 is a schematic flowchart of a second embodiment of the text annotation method for street view video of the present invention;
Fig. 8 is a schematic flowchart of a third embodiment of the text annotation method for street view video of the present invention;
Fig. 9 is a schematic flowchart of a fourth embodiment of the text annotation method for street view video of the present invention;
Fig. 10 is a schematic view of multiple video frames containing the same street view object in the text annotation method for street view video of the present invention;
Fig. 11 is a schematic circuit diagram of an embodiment of the terminal device of the present invention;
Fig. 12 is a schematic functional block diagram of an embodiment of the terminal device of the present invention;
Fig. 13 is a schematic functional block diagram of an embodiment of the indication-boundary generation module in the terminal device of the present invention;
Fig. 14 is a schematic functional block diagram of an embodiment of the preliminary-boundary generation submodule in the terminal device of the present invention;
Fig. 15 is a schematic functional block diagram of an embodiment of the text-parameter receiving module in the terminal device of the present invention;
Fig. 16 is a schematic functional block diagram of a second embodiment of the terminal device of the present invention;
Fig. 17 is a schematic functional block diagram of a third embodiment of the terminal device of the present invention;
Fig. 18 is a schematic functional block diagram of a fourth embodiment of the terminal device of the present invention.
Specific embodiments
In order to make the purpose, technical scheme, and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Fig. 1 illustrates an embodiment of the text annotation method for street view video of the present invention. In this embodiment, the method comprises the following steps:
Step S1: in the street view video, acquire the current frame according to a click command, and add an annotation point in the current frame at the click position.
In step S1, while the street view video is being played on a display screen, a click operation by the user on the display screen is received, a click command is generated from that operation, and the click position of the operation is recorded. The click command is executed to acquire the current frame, and an annotation point is added in the current frame at the click position.
It should be noted that in this embodiment, during street view video display, the user can add annotation points to street view objects in real time and associate text parameters with those points in real time, thereby achieving dynamic addition of annotation points and text parameters for street view objects.
Step S2: generate the indication boundary of the street view object according to the region in which the annotation point is located.
Specifically, referring to Fig. 2, the street view object in this embodiment is a tower building. For a more intuitive display, the annotation point 1 of this embodiment may take the form of a ring, a flag, a pushpin, and so on; this embodiment is described in detail taking a ring as an example. Further, the indication boundary 2 in this embodiment may be the outer contour of the whole street view object, a partial region of the street view object, and so on; this embodiment is described in detail taking the outer contour of the whole street view object as an example.
On the basis of the above embodiment, in another embodiment, referring to Fig. 3, step S2 comprises:
Step S20: generate a preliminary boundary from the pixel region in which the annotation point lies in the current frame.
On the basis of the above embodiment, in another embodiment, referring to Fig. 4, step S20 comprises:
Step S200: detect whether the street view video displays the street view object from a single angle or from multiple angles. If from a single angle, perform step S201; if from multiple angles, perform step S202.
It should be noted that multi-angle display in this embodiment refers to realizing different display angles, for example by adjusting the scroll wheel of a mouse.
Step S201: generate a two-dimensional preliminary boundary.
Step S202: generate a three-dimensional preliminary boundary.
This embodiment generates different preliminary boundaries according to the display angle, meeting the different needs of users and improving the user experience.
Step S21: upon receiving an adjustment command or a confirmation command for the preliminary boundary, generate the indication boundary and its corresponding boundary parameters.
To further detail the technical scheme of the invention, referring to Fig. 5, the current frame is output and displayed in the video display interface 10, and a preliminary boundary 12 is generated from the pixel region in which the annotation point 11 lies in the current frame. After receiving the user's adjustment command for the preliminary boundary 12, the indication boundary 13 and the boundary parameters of that indication boundary are generated. For example, if the annotation point 11 in the current frame is a click on a second-floor window of a building, the preliminary boundary 12 may be the whole building, while the adjusted indication boundary 13 is only the partial region of the second floor.
It should be noted that there are many ways for the user to enter the adjustment command, for example scaling the preliminary boundary 12 into the indication boundary by adjusting its length and/or width. Further, when the user takes the preliminary boundary 12 itself as the required indication boundary 13, the user only needs to enter a confirmation command; there are likewise many ways to do so in this embodiment, for example clicking a confirm button.
By setting the indication boundary, when the street view video is subsequently output and displayed frame by frame, the text parameters are output and displayed only when a gesture or touch command sweeps across the indication boundary. This reduces the output data volume of each frame, which both increases the output speed and improves the effective rate of text-parameter output.
Step S3: receive the text parameters associated with the annotation point.
One concrete implementation is as follows: at each annotation point of the current street view, OpenCV functions are called from a language such as C, C++, Java, Python, Ruby, or Matlab to draw the attribute text of the related street view object. OpenCV (Open Source Computer Vision Library) is an open-source computer vision library; it is lightweight and efficient, consists of a series of C functions and C++ classes, and its text-output functions include cvInitFont, cvPutText, and cvGetTextSize.
3.1 cvInitFont initializes a font structure and is defined as:
void cvInitFont(CvFont* font, int font_face, double hscale, double vscale, double shear, int thickness, int line_type);
Here font is the font structure, font_face is the font name identifier, hscale is the width magnification coefficient, vscale is the height magnification coefficient, shear is the font slope, thickness is the stroke thickness, and line_type is the stroke type. The function cvInitFont initializes the font structure, which is then passed to the text drawing function.
3.2 cvPutText displays a text string in an image and is defined as:
void cvPutText(CvArr* img, const char* text, CvPoint org, const CvFont* font, CvScalar color);
Here img is the image on which the text is to be drawn, text is the string to display, org is the coordinate of the lower-left corner of the first character, font is the font structure, and color is the font color. The function cvPutText draws the text into the image with the specified font and color; text drawn into the image is clipped by the ROI rectangle.
3.3 cvGetTextSize obtains the width and height of a string and is defined as:
void cvGetTextSize(const char* text, const CvFont* font, CvSize* text_size, int* baseline);
On the basis of the above embodiment, in another embodiment, referring to Fig. 6, step S3 comprises:
Step S30: receive text parameters that are entered or imported, the text parameters including the latitude and longitude, name, telephone number, and/or product introduction of the street view object.
In step S30, the text parameters of this embodiment may be entered one by one by the user, imported by the user, or a combination of the two. For example, the name may be entered by the user while the registered address information is imported; or, after the user enters an address, the merchant information corresponding to that address may be brought in by importing it.
Step S31: associate the text parameters with the corresponding annotation point.
This embodiment can receive both text parameters entered by the user and text parameters imported by the user, broadening the ways in which text parameters can be obtained and thus improving the annotation rate of text parameters and the user experience.
Step S4: in the street view video, read the associated frames of the current frame, and when the street view object is present in an associated frame, match the annotation point and the text parameters to it.
In step S4, the associated frames of the current frame are read; when the street view object is present in an associated frame, the annotation point is added to the street view object in that frame and the text parameters are associated with it.
In this embodiment, after the operation of adding an annotation point and associating text parameters has been performed on one video frame of the street view video, the same operation is applied uniformly to all associated frames containing the same street view object, further improving the annotation rate and text-parameter association rate of all relevant video frames in the street view video.
By outputting the street view video frame by frame, this embodiment avoids both stretching deformation and uneven transitions between images, improving the display effect. Further, the invention dynamically adds an annotation point, and text parameters associated with it, to the street view object in the current frame and matches them to the corresponding street view object in the associated frames, improving the speed of annotation and the coverage of street view objects, and enabling the user to subsequently carry out value-added applications such as querying, searching, navigation, and advertising based on the text parameters.
When the text annotation method for street view video of the present invention is applied in the use of a terminal device, the annotation points, boundary parameters, and text parameters must be stored so that they can be called up for subsequent output and display. Therefore, on the basis of the above embodiment, in another embodiment, referring to Fig. 7, the method further comprises:
Step S40: store the annotation point and its associated boundary parameters and text parameters.
In step S40, the annotation points, associated boundary parameters, and text parameters of each frame are stored in the street view database.
Step S41: store the frame number of the current frame and the frame numbers of the associated frames in which the street view object is present, and associate those frame numbers with the annotation point.
In step S41, different video frames are distinguished by frame number, and the frame numbers are associated with the annotation point and stored.
This embodiment immediately stores the frame number, annotation points, boundary parameters, and text parameters of every frame in which the street view object is present. When the street view video is subsequently output and displayed, the annotation points, the indication boundaries corresponding to the boundary parameters, and the text parameters in the video frame corresponding to a given frame number are output and displayed according to that frame number, so the data in the street view database can be retrieved quickly at display time. This avoids fixing annotation points, indication boundaries, and text parameters in every video frame, which would make the output data volume of each frame excessive and the output display rate too low. It also avoids the text parameters of adjacent annotation points interfering with one another when output, or the text parameters of too many annotation points being output at once, which would make the display chaotic, hinder viewing, and reduce the user experience.
When the text annotation method for street view video of the present invention is applied in the use of a terminal device, only the text information the user needs should be displayed. Therefore, on the basis of the above embodiment, in another embodiment, referring to Fig. 8, the method further comprises:
Step S50: when playing the street view video, display the annotation point if the frame being played contains it.
Step S51: detect whether a gesture or touch command sweeps across the indication boundary corresponding to the annotation point. When a cursor command or touch command sweeps across the indication boundary corresponding to the annotation point, perform step S52.
It should be noted that the gesture in this embodiment may be the mouse cursor entering the indication boundary, and the touch command may be a touch signal entered by the user on the display screen within the indication boundary.
Step S52: display the text parameters corresponding to the annotation point.
This embodiment immediately outputs all annotation points in the current frame so that the user knows about them at once, improving the user experience. In addition, the text parameters of a target annotation point are output and displayed only when the cursor enters the indication boundary of that point. This avoids the text parameters of adjacent annotation points interfering with one another, which would be hard to view and reduce the user experience; it also avoids outputting the text parameters of many annotation points at once, which would reduce the output display rate of the current frame and, if few or none of the displayed text parameters are needed by the user, reduce the efficiency of text-parameter output.
When the text annotation method for street view video of the present invention is applied in the use of a terminal device, the street view video the user needs must be retrievable. Therefore, on the basis of the above embodiment, in another embodiment, referring to Fig. 9, the method further comprises:
Step S60: receive a search term entered by the user.
It should be noted that the search term entered in this embodiment may be latitude and longitude, a name, product information, and so on.
Step S61: judge whether a text parameter exactly matching the search term exists. If such a text parameter exists, perform step S62; if not, perform step S63.
In step S61, exact matching in this embodiment may be: obtain the similarity between the search term and a text parameter; if the similarity exceeds a preset threshold, the text parameter is taken as exactly matching the search term.
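The similarity-threshold test can be sketched with a standard string-similarity measure. `SequenceMatcher` and the 0.8 threshold are illustrative choices; the patent only requires "similarity above a preset threshold" without fixing the measure or the value:

```python
from difflib import SequenceMatcher

THRESHOLD = 0.8  # illustrative preset threshold

def exact_match(term, text_params):
    """Return text parameters whose similarity to the term exceeds the threshold."""
    hits = []
    for p in text_params:
        if SequenceMatcher(None, term.lower(), p.lower()).ratio() > THRESHOLD:
            hits.append(p)
    return hits

params = ["Pagoda Restaurant", "pagoda restaurant", "City Museum"]
found = exact_match("Pagoda Restaurant", params)
```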
Step S62: read at least one street view video result from the street view database according to the text parameter, or read at least one street view video segment from the street view video.
Further, on the basis of the above embodiment, in another embodiment, after step S62 the method further comprises:
Step S70: when multiple street view video results or multiple street view video segments are obtained, display the prime frame and video length of each result or segment.
It should be noted that the so-called prime frame is a video frame in which the annotation point or street view object corresponding to the text parameter is complete and occupies the largest proportion of the frame, or lies in a preset priority area of the frame (for example, the golden-section point). For example, referring to Fig. 10, the frames represented by frames I to VI all contain the street view object, a palace; frame II (golden-section point) or frame III (largest proportion) would preferably be taken as the prime frame of the video segment.
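Prime-frame selection can be sketched as follows. Each entry is (frame_no, coverage, at_priority_area), where coverage is the proportion of the frame the object occupies and the flag marks frames whose object sits in the preset priority area (e.g. the golden-section point). The preference order — priority area first, then largest coverage — is one reading of the patent's example, stated here as an assumption:

```python
def pick_prime_frame(frames):
    """Pick the prime frame of a segment from (frame_no, coverage, flag) tuples."""
    golden = [f for f in frames if f[2]]
    if golden:
        # prefer frames in the priority area, breaking ties by coverage
        return max(golden, key=lambda f: f[1])[0]
    # otherwise take the frame with the largest object coverage
    return max(frames, key=lambda f: f[1])[0]

segment = [(1, 0.3, False), (2, 0.5, True), (3, 0.9, False)]
prime = pick_prime_frame(segment)
```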
Step S63: fuzzily process the search term until a street view video result or street view video segment is read.
In step S63, fuzzily processing the search term in this embodiment includes, but is not limited to, the following operations:
(1) split the search term into multiple keywords and search with those keywords;
(2) obtain the semantics of the search term and search with the semantics.
This embodiment carries out search operations intelligently according to the degree of match between the search term and the text information, improving both intelligent processing performance and the user experience. In addition, displaying the prime frame and video length of each street view video result or segment lets the user quickly tell whether the required street view video has been retrieved, further improving intelligent processing performance and the user experience.
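Operation (1) above, keyword splitting, can be sketched as follows. Tokenization by whitespace is an illustrative choice, and the semantic search of operation (2) is out of scope for this sketch:

```python
def fuzzy_search(term, text_params):
    """Split the term into keywords and return params matching any keyword."""
    keywords = term.lower().split()
    hits = []
    for p in text_params:
        lowered = p.lower()
        if any(k in lowered for k in keywords):
            hits.append(p)
    return hits

params = ["Pagoda Restaurant", "City Museum", "Old Palace Gate"]
results = fuzzy_search("palace restaurant", params)
```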
Fig. 11 shows an embodiment of the terminal device of the present invention. In this embodiment, the terminal device comprises a memory 2, a processor 1, and a display 3, the processor 1 being coupled to the memory 2 and the display 3.
The processor 1 may be a general-purpose central processing unit, a microprocessor, an application-specific integrated circuit, or one or more integrated circuits used to control the execution of the program of the present scheme.
The memory 2 may be a read-only memory (a static storage device capable of storing static information and instructions), a random access memory (a dynamic storage device capable of storing information and instructions), an electrically erasable programmable read-only memory, a compact disc read-only memory or other optical disc storage, a magnetic disk storage medium, or another magnetic storage device. The memory 2 may be connected to the processor 1 by a communication bus or integrated with the processor 1.
The memory 2 may be used to store the computer program executing the present scheme, and the processor 1 may be used to execute the computer program stored in the memory 2, thereby realizing the text annotation method for street view video described in the above embodiments.
Specifically, referring to Fig. 12, in this embodiment the terminal device comprises an annotation point adding module 10, an indication boundary generation module 11, a text parameter receiving module 12, and an associated frame matching module 13.
The annotation point adding module 10 is configured to acquire, in the street view video, the current frame according to a click command and add an annotation point in the current frame at the click position. The indication boundary generation module 11 is configured to generate the indication boundary of the street view object according to the region in which the annotation point is located. The text parameter receiving module 12 is configured to receive the text parameters associated with the annotation point. The associated frame matching module 13 is configured to read, in the street view video, the associated frames of the current frame and, when the street view object is present in an associated frame, match the annotation point and the text parameters to it.
On the basis of the above embodiment, in another embodiment, referring to Fig. 13, the indication boundary generation module 11 comprises a preliminary boundary generation submodule 110 and an indication boundary generation submodule 111.
The preliminary boundary generation submodule 110 is configured to generate a preliminary boundary from the pixel region in which the annotation point lies in the current frame. The indication boundary generation submodule 111 is configured, upon receiving an adjustment command or a confirmation command for the preliminary boundary, to generate the indication boundary and its corresponding boundary parameters.
On the basis of the above embodiment, in another embodiment, referring to Figure 14, the preliminary border generation submodule 110 includes a display angle detection unit 1101, a two-dimensional preliminary border generation unit 1102, and a three-dimensional preliminary border generation unit 1103.
The display angle detection unit 1101 is configured to detect whether the streetscape video shows the streetscape object from a single angle or from multiple angles. The two-dimensional preliminary border generation unit 1102 is configured to generate a two-dimensional preliminary border if the object is shown from a single angle. The three-dimensional preliminary border generation unit 1103 is configured to generate a three-dimensional preliminary border if the object is shown from multiple angles.
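A sketch of units 1101 through 1103: choose a 2D or 3D preliminary border depending on whether the object appears from one viewpoint or several. Counting distinct camera angles is a stand-in assumption; the patent does not state how single- versus multi-angle display is detected.

```python
def detect_display(angles_seen):
    """Unit 1101: single-angle vs multi-angle display of the streetscape object."""
    return "single" if len(set(angles_seen)) == 1 else "multi"

def preliminary_border_for(angles_seen, x, y, z=0.0, half=10):
    mode = detect_display(angles_seen)
    if mode == "single":
        # Unit 1102: a flat 2D box suffices for a single viewpoint
        return {"dim": 2, "x0": x - half, "y0": y - half,
                "x1": x + half, "y1": y + half}
    # Unit 1103: a 3D box covers the object across viewpoints
    return {"dim": 3, "x0": x - half, "y0": y - half, "z0": z - half,
            "x1": x + half, "y1": y + half, "z1": z + half}

b2 = preliminary_border_for([0], 50, 50)             # one camera angle -> 2D
b3 = preliminary_border_for([0, 90, 180], 50, 50, 5) # several angles -> 3D
```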
On the basis of the above embodiment, in another embodiment, referring to Figure 15, the text parameter receiving module 12 includes a text parameter receiving unit 120 and a text parameter association unit 121.
The text parameter receiving unit 120 is configured to receive an input or imported text parameter; the text parameter includes the longitude and latitude, name, telephone number, and/or product introduction of the streetscape object. The text parameter association unit 121 is configured to associate the text parameter with the corresponding mark point.
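Units 120 and 121 can be sketched as a filter-then-bind step. The English field names and the dict-based store are illustrative assumptions; only the set of fields (longitude/latitude, name, telephone number, product introduction) comes from the embodiment.

```python
# Fields enumerated by the embodiment for a streetscape object's text parameter
ALLOWED_FIELDS = {"latitude", "longitude", "name", "phone", "product_intro"}

def receive_text_param(raw):
    """Unit 120: accept an input or imported text parameter, keeping only
    the fields the embodiment enumerates."""
    return {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}

def associate(mark_points, point_id, params):
    """Unit 121: bind the text parameter to the corresponding mark point."""
    mark_points.setdefault(point_id, {}).update(params)
    return mark_points[point_id]

points = {}
tp = receive_text_param({"name": "Bookstore", "latitude": 22.54,
                         "longitude": 114.06, "color": "red"})
associate(points, "p1", tp)   # "color" is silently dropped by unit 120
```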
On the basis of the above embodiment, in another embodiment, referring to Figure 16, the terminal device further includes a storage module 20 and a frame number association module 21.
The storage module 20 is configured to store the mark point, the associated boundary parameters, and the text parameter. The frame number association module 21 is configured to store the frame number of the current frame and the frame numbers of the associated frames in which the streetscape object is present, and to associate the frame number of the current frame and the frame numbers of the associated frames with the mark point.
On the basis of the above embodiment, in another embodiment, referring to Figure 17, the terminal device further includes a mark point display module 30, a detection module 31, and a text parameter display module 32.
The mark point display module 30 is configured to, when the streetscape video is played, display the mark point if a playing frame contains it. The detection module 31 is configured to detect whether a gesture or touch instruction crosses the indication border corresponding to the mark point. The text parameter display module 32 is configured to, when a gesture or touch instruction crosses the indication border corresponding to the mark point, display the text parameter corresponding to the mark point.
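The gesture-crossing check of modules 31 and 32 can be sketched as a hit test between a swipe path and a rectangular indication border. Sampling points along the path is a simplifying assumption (an exact segment-rectangle intersection would be more robust).

```python
def crosses_border(path, border, samples=32):
    """Module 31: does any sampled point of the gesture path fall inside
    the indication border?"""
    (x0, y0), (x1, y1) = path
    for i in range(samples + 1):
        t = i / samples
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        if border["x0"] <= x <= border["x1"] and border["y0"] <= y <= border["y1"]:
            return True
    return False

def on_gesture(path, marks):
    """Module 32: collect the text parameters of every mark whose border
    the gesture crossed, for display."""
    return [m["text"] for m in marks if crosses_border(path, m["border"])]

marks = [{"border": {"x0": 40, "y0": 40, "x1": 60, "y1": 60},
          "text": {"name": "Cafe"}}]
shown = on_gesture(((0, 50), (100, 50)), marks)  # horizontal swipe through the box
```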
On the basis of the above embodiment, in another embodiment, referring to Figure 18, the terminal device further includes a search term receiving module 40, an exact matching module 41, a reading module 42, and a fuzzy processing module 43.
The search term receiving module 40 is configured to receive a search term input by a user. The exact matching module 41 is configured to determine whether a text parameter exactly matching the search term exists. The reading module 42 is configured to, if such a text parameter exists, read at least one segment of streetscape video result from the street view database, or read at least one streetscape video clip from the streetscape video, according to the text parameter. The fuzzy processing module 43 is configured to, if no such text parameter exists, fuzzy-process the search term until a streetscape video result or a streetscape video clip is read.
Further, the terminal device also includes a retrieval result display module 50. The retrieval result display module 50 is configured to, when multiple streetscape video results or multiple streetscape video clips are obtained, display the main frame and video length of each streetscape video result or each streetscape video clip.
For other implementation details of the modules in the terminal devices of the above seven embodiments, refer to the description of the character marking method for streetscape video in the above embodiments; they are not repeated here.
It should be noted that the embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be cross-referenced. Since the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for related details, refer to the description of the method embodiments.
The embodiments of the present application further provide a storage medium for storing a computer program, which includes the computer program designed to perform the above character marking method embodiments for streetscape video. By executing the computer program stored in the storage medium, the character marking method for streetscape video provided by the present application can be realized.
The specific embodiments of the invention have been described in detail above, but they serve only as examples; the present invention is not limited to the specific embodiments described above. For those skilled in the art, any equivalent modification or substitution of the invention likewise falls within its scope. Therefore, equivalent transformations, modifications, and improvements made without departing from the spirit and principles of the present invention shall all be covered by the scope of the present invention.
Claims (10)
- 1. A character marking method for streetscape video, characterized by comprising:
in a streetscape video, obtaining a current frame according to a click instruction, and adding a mark point in the current frame according to the click position;
generating an indication border of a streetscape object according to the region where the mark point is located;
receiving a text parameter associated with the mark point;
in the streetscape video, reading associated frames of the current frame, and, when the streetscape object is present in an associated frame, matching the mark point and the text parameter.
- 2. The character marking method for streetscape video according to claim 1, characterized in that receiving the text parameter associated with the mark point comprises:
receiving an input or imported text parameter, the text parameter including the longitude and latitude, name, telephone number, and/or product introduction of the streetscape object;
associating the text parameter with the corresponding mark point.
- 3. The character marking method for streetscape video according to claim 1, characterized in that generating the indication border of the streetscape object according to the region where the mark point is located comprises:
generating a preliminary border from the pixel region where the mark point is located in the current frame;
upon receiving an adjustment instruction or a confirmation instruction for the preliminary border, generating the indication border and its corresponding boundary parameters.
- 4. The character marking method for streetscape video according to claim 3, characterized in that generating the preliminary border comprises:
detecting whether the streetscape video shows the streetscape object from a single angle or from multiple angles;
if from a single angle, generating a two-dimensional preliminary border;
if from multiple angles, generating a three-dimensional preliminary border.
- 5. The character marking method for streetscape video according to claim 3, characterized by further comprising:
storing the mark point, the associated boundary parameters, and the text parameter;
storing the frame number of the current frame and the frame numbers of the associated frames in which the streetscape object is present, and associating the frame number of the current frame and the frame numbers of the associated frames with the mark point.
- 6. The character marking method for streetscape video according to any one of claims 1 to 5, characterized by further comprising:
when playing the streetscape video, if a playing frame contains the mark point, displaying the mark point;
when a gesture or touch instruction crosses the indication border corresponding to the mark point, displaying the text parameter corresponding to the mark point.
- 7. The character marking method for streetscape video according to any one of claims 1 to 5, characterized by further comprising:
receiving a search term input by a user;
determining whether a text parameter exactly matching the search term exists;
if it exists, reading, according to the text parameter, at least one segment of streetscape video result from a street view database, or reading at least one streetscape video clip from the streetscape video;
if it does not exist, fuzzy-processing the search term until a streetscape video result or a streetscape video clip is read.
- 8. The character marking method for streetscape video according to claim 7, characterized by further comprising, after reading at least one segment of streetscape video result or reading at least one streetscape video clip from the streetscape video:
when multiple streetscape video results or multiple streetscape video clips are obtained, displaying the main frame and video length of each streetscape video result or each streetscape video clip.
- 9. A terminal device, characterized by comprising a processor, a memory, and a display, the processor being coupled to the memory, and the memory storing a computer program runnable on the processor;
when the processor executes the computer program, the character marking method for streetscape video according to any one of claims 1 to 8 is implemented.
- 10. A storage medium having a computer program stored thereon, characterized in that, when the computer program is executed by a processor, the character marking method for streetscape video according to any one of claims 1 to 8 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711399597.1A CN108108443A (en) | 2017-12-21 | 2017-12-21 | Character marking method of street view video, terminal equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108108443A true CN108108443A (en) | 2018-06-01 |
Family
ID=62211645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711399597.1A Pending CN108108443A (en) | 2017-12-21 | 2017-12-21 | Character marking method of street view video, terminal equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108108443A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101082926A (en) * | 2007-07-03 | 2007-12-05 | 浙江大学 | Modeling approach used for trans-media digital city scenic area |
CN103268730A (en) * | 2013-06-03 | 2013-08-28 | 北京奇虎科技有限公司 | Method and device for displaying associated dimension points on electric map interface |
CN104504054A (en) * | 2014-12-19 | 2015-04-08 | 深圳先进技术研究院 | Location character display method and system based on streetscape attribute information |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109859612A (en) * | 2019-01-16 | 2019-06-07 | 中德(珠海)人工智能研究院有限公司 | A kind of method and its system of the three-dimensional annotation of streetscape data |
CN110084895A (en) * | 2019-04-30 | 2019-08-02 | 上海禾赛光电科技有限公司 | The method and apparatus that point cloud data is labeled |
CN110084895B (en) * | 2019-04-30 | 2023-08-22 | 上海禾赛科技有限公司 | Method and equipment for marking point cloud data |
CN110796715A (en) * | 2019-08-26 | 2020-02-14 | 腾讯科技(深圳)有限公司 | Electronic map labeling method, device, server and storage medium |
CN110796715B (en) * | 2019-08-26 | 2023-11-24 | 腾讯科技(深圳)有限公司 | Electronic map labeling method, device, server and storage medium |
CN110751149A (en) * | 2019-09-18 | 2020-02-04 | 平安科技(深圳)有限公司 | Target object labeling method and device, computer equipment and storage medium |
CN110751149B (en) * | 2019-09-18 | 2023-12-22 | 平安科技(深圳)有限公司 | Target object labeling method, device, computer equipment and storage medium |
CN111179271A (en) * | 2019-11-22 | 2020-05-19 | 浙江众合科技股份有限公司 | Object angle information labeling method based on retrieval matching and electronic equipment |
CN111866488A (en) * | 2020-07-23 | 2020-10-30 | 深圳市福莱斯科数据开发有限公司 | Editing system and editing method based on panoramic image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108108443A (en) | Character marking method of street view video, terminal equipment and storage medium | |
US11595737B2 (en) | Method for embedding advertisement in video and computer device | |
Jian et al. | Integrating QDWD with pattern distinctness and local contrast for underwater saliency detection | |
CN108830780B (en) | Image processing method and device, electronic device and storage medium | |
CN109308469B (en) | Method and apparatus for generating information | |
CN109300179B (en) | Animation production method, device, terminal and medium | |
US20110313859A1 (en) | Techniques for advertiser geotargeting using map coordinates | |
CN108255961A (en) | Image annotation method of street view video, terminal device and storage medium | |
CN111553362B (en) | Video processing method, electronic device and computer readable storage medium | |
CN106648319A (en) | Operation method and apparatus used for mind map | |
US20220375220A1 (en) | Visual localization method and apparatus | |
CN109121000A (en) | A kind of method for processing video frequency and client | |
EP3945456B1 (en) | Video event recognition method and apparatus, electronic device and storage medium | |
CN111415298A (en) | Image splicing method and device, electronic equipment and computer readable storage medium | |
CN111984803B (en) | Multimedia resource processing method and device, computer equipment and storage medium | |
CN111223155B (en) | Image data processing method, device, computer equipment and storage medium | |
CN114359932B (en) | Text detection method, text recognition method and device | |
CN113516697B (en) | Image registration method, device, electronic equipment and computer readable storage medium | |
WO2021136224A1 (en) | Image segmentation method and device | |
CN108170754A (en) | Website labeling method of street view video, terminal device and storage medium | |
CN112818908A (en) | Key point detection method, device, terminal and storage medium | |
WO2023138558A1 (en) | Image scene segmentation method and apparatus, and device and storage medium | |
CN104917963A (en) | Image processing method and terminal | |
CN111915532A (en) | Image tracking method and device, electronic equipment and computer readable medium | |
CN111914850A (en) | Picture feature extraction method, device, server and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180601 |