CN109886201A - Monitoring image annotation method and device - Google Patents
Monitoring image annotation method and device
- Publication number: CN109886201A
- Authority
- CN
- China
- Prior art keywords
- target
- video camera
- marked
- gis
- target video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
An embodiment of the present application provides a monitoring image annotation method and device. The first position of one or more objects to be annotated is obtained from a geographic information system; the target shooting parameters corresponding to the target shooting area of a target camera determined in the geographic information system, together with the second position of that camera in the geographic information system, are also obtained. From these, the target position of the imaging point of each object to be annotated on the imaging plane of the target camera is determined and used to place the annotation information of the object. In this way, multiple objects to be annotated can be annotated automatically in batch, and the same object can be annotated in the monitoring images of multiple cameras.
Description
Technical field
This application relates to the field of security surveillance, and in particular to a monitoring image annotation method and device.
Background art
In some monitoring scenarios, a user needs to view information about target positions (e.g., place names or building names) in a camera's monitoring image, and that information therefore has to be annotated in the image. In traditional annotation schemes, the information for each target position can only be entered manually, one position at a time, in the monitoring image of a single camera, which is inconvenient.
Summary of the invention
In view of this, an object of the present application is to provide a monitoring image annotation method and device that at least partially solve the above problem.
To this end, the embodiments of the present application propose the following technical solutions:
In a first aspect, an embodiment of the present application provides a monitoring image annotation method, the method comprising:
obtaining the first position of one or more objects to be annotated from a geographic information system, the annotation information of the objects being recorded in the geographic information system;
determining one or more target cameras, and obtaining the target shooting parameters corresponding to the target shooting area of each target camera and the second position of the target camera in the geographic information system;
determining, according to the first position, the second position, and the target shooting parameters, the target position of the imaging point of the object to be annotated on the imaging plane of the target camera, the target position being used to place the annotation information of the object.
In a second aspect, an embodiment of the present application provides a monitoring image annotation device, the device comprising:
a first-position obtaining module, configured to obtain the first position of one or more objects to be annotated from a geographic information system, the annotation information of the objects being recorded in the geographic information system;
a second-position obtaining module, configured to determine one or more target cameras and to obtain the target shooting parameters corresponding to the target shooting area of each target camera and the second position of the target camera in the geographic information system;
a target-position determining module, configured to determine, according to the first position, the second position, and the target shooting parameters, the target position of the imaging point of the object to be annotated on the imaging plane of the target camera, for placing the annotation information of the object.
In a third aspect, an embodiment of the present application provides a monitoring image annotation method, the method comprising:
determining the second position of a target camera in a geographic information system;
determining, according to the second position and target shooting parameters, the region that the target camera can capture with the target shooting parameters as the target shooting area;
determining, from multiple annotation objects, one or more target annotation objects located in the target shooting area, the annotation information of the multiple annotation objects being recorded in the geographic information system;
obtaining the first position of the one or more target annotation objects from the geographic information system;
determining, according to the first position, the second position, and the target shooting parameters, the target position of the imaging point of the target annotation object on the imaging plane of the target camera, for placing the annotation information of the target annotation object.
In a fourth aspect, an embodiment of the present application provides a monitoring image annotation device, the device comprising:
a first determining module, configured to determine the second position of a target camera in a geographic information system;
a second determining module, configured to determine, according to the second position and target shooting parameters, the region that the target camera can capture with the target shooting parameters as the target shooting area;
a third determining module, configured to determine, from multiple annotation objects, one or more target annotation objects located in the target shooting area, the annotation information of the multiple annotation objects being recorded in the geographic information system;
an obtaining module, configured to obtain the first position of the one or more target annotation objects from the geographic information system;
a fourth determining module, configured to determine, according to the first position, the second position, and the target shooting parameters, the target position of the imaging point of the target annotation object on the imaging plane of the target camera, for placing the annotation information of the target annotation object.
Compared with the prior art, the beneficial effects of the present application include:
The embodiments of the present application provide a monitoring image annotation method and device. The first position of one or more objects to be annotated is obtained from a geographic information system, together with the target shooting parameters corresponding to the target shooting area of a determined target camera and the second position of that camera in the geographic information system, so as to determine the target position of the imaging point of each object on the imaging plane of the target camera, which is used to place the annotation information of the object. In this way, multiple objects to be annotated can be annotated automatically in batch, and the same object can be annotated in the monitoring images of multiple cameras.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed for the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the application and should therefore not be regarded as limiting its scope; from these drawings, a person of ordinary skill in the art can obtain other related drawings without creative effort.
Fig. 1 is a block diagram of a data processing device provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of a monitoring image annotation method provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of the sub-steps of step S203 in Fig. 2;
Fig. 4 is a functional block diagram of a monitoring image annotation device provided by an embodiment of the present application;
Fig. 5 is a schematic flowchart of another monitoring image annotation method provided by an embodiment of the present application;
Fig. 6 is a functional block diagram of another monitoring image annotation device provided by an embodiment of the present application.
Reference numerals: 10 - data processing device; 11 - machine-readable storage medium; 12 - processor; 400, 600 - monitoring image annotation device; 410 - first-position obtaining module; 420 - second-position obtaining module; 430 - target-position determining module; 431 - imaging-plane determining submodule; 432 - imaging-point determining submodule; 433 - target-position determining submodule; 440 - annotating module; 610 - first determining module; 620 - second determining module; 630 - third determining module; 640 - obtaining module; 650 - fourth determining module.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the claimed scope of the present application, but merely represents selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the application.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
Referring to Fig. 1, Fig. 1 shows a data processing device 10 provided by an embodiment of the present application. The data processing device may be a server, a personal computer, or any other device with data processing and communication functions; the server may be a single server or a server cluster whose servers communicate with one another.
In this embodiment, the data processing device 10 may be a device running a geographic information system, such as any server in a server cluster running the geographic information system, or it may be another device that communicates with the device running the geographic information system.
A geographic information system (GIS) is a technical system that, supported by computer hardware and software, collects, stores, manages, computes, analyzes, displays, and describes geographically distributed data for all or part of the earth's surface space (including the atmosphere). In the geographic information system, the real world is reproduced through three-dimensional imagery.
The data processing device 10 may include a processor 12 and a machine-readable storage medium 11, which may communicate via a system bus. The machine-readable storage medium 11 stores machine-executable instructions; by reading and executing the instructions corresponding to the monitoring image annotation logic in the machine-readable storage medium 11, the processor 12 can perform the monitoring image annotation method described below.
The machine-readable storage medium 11 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium 11 may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disc (such as a CD or DVD), a similar storage medium, or a combination thereof.
It should be understood that the structure shown in Fig. 1 is merely illustrative; the data processing device 10 may include more or fewer components than shown in Fig. 1, or have a configuration entirely different from that shown in Fig. 1. Each component shown in Fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a monitoring image annotation method provided by an embodiment of the present application. The method can be applied to the data processing device 10 shown in Fig. 1; each step of the method is described in detail below.
Step S201: obtain the first position of one or more objects to be annotated from the geographic information system, the annotation information of the objects being recorded in the geographic information system.
The objects to be annotated include regions of the earth's surface space, buildings, and the like.
In this embodiment, the geographic information system may define a geographic coordinate system for every position in all or part of the earth's surface space (including the atmosphere); the geographic coordinate system may be, for example, Beijing 54, Xi'an 80, or WGS 84 (World Geodetic System 1984). Under this geographic coordinate system, the coordinates of the first position of the one or more objects to be annotated can be determined.
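Projection onto a camera's imaging plane is computed in metres, so a geographic first position is usually converted into a local metric frame around the camera first. The patent gives no formulas for this; the sketch below assumes WGS 84 latitude/longitude and an equirectangular small-area approximation, and the helper name `enu_offset` is illustrative.

```python
import math

WGS84_RADIUS_M = 6378137.0  # WGS 84 equatorial radius in metres

def enu_offset(lat_cam, lon_cam, alt_cam, lat_obj, lon_obj, alt_obj):
    """Approximate East/North/Up offset (metres) of an object relative to a
    camera, adequate for the short ranges typical of surveillance scenes."""
    d_lat = math.radians(lat_obj - lat_cam)
    d_lon = math.radians(lon_obj - lon_cam)
    north = d_lat * WGS84_RADIUS_M
    east = d_lon * WGS84_RADIUS_M * math.cos(math.radians(lat_cam))
    up = alt_obj - alt_cam
    return east, north, up
```

A production system would instead transform both positions through the geographic information system's own coordinate system, since the equirectangular approximation degrades over long distances.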
Step S202: determine one or more target cameras, and obtain the target shooting parameters corresponding to the target shooting area of each target camera and the second position of the target camera in the geographic information system.
In this embodiment, the second position of the target camera in the geographic information system is the installation position of the target camera, which also has coordinates under the geographic coordinate system. The target shooting parameters may be the set of shooting parameters (including rotation parameters and lens parameters) of the target camera at any given moment. When the target shooting area includes at least one object to be annotated, i.e., when the target camera can capture at least one object to be annotated while shooting with the target shooting parameters, that object can be annotated through the following step S203.
Step S203: determine, according to the first position, the second position, and the target shooting parameters, the target position of the imaging point of the object to be annotated on the imaging plane of the target camera, for placing the annotation information of the object.
In this embodiment, the target position of the imaging point of the object on the imaging plane of the target camera is the position of the object in the monitoring image when the target camera shoots with the target shooting parameters. For any target camera whose target shooting area includes the object to be annotated, the position of the object in that camera's monitoring image under its target shooting parameters can be determined, and the annotation information of the object can then be placed at that position. The above monitoring image annotation method can therefore not only annotate multiple objects automatically in batch, but also annotate the same object in the monitoring images of multiple cameras.
Optionally, step S203 may include the sub-steps shown in Fig. 3.
Step S301: determine the position of the focus and of the imaging plane of the target camera according to the second position and the target shooting parameters.
In this embodiment, once the installation position of the target camera (i.e., the second position) and the target shooting parameters are known, the position of the imaging plane when the target camera shoots with those parameters can be determined.
Step S302: determine the intersection of the imaging plane with the line connecting the first position and the position of the focus, and take the intersection as the imaging point.
According to the imaging principle of a camera, the intersection of the imaging plane with the line connecting the position of the object to be annotated (i.e., the first position) and the position of the focus of the target camera is the imaging point of the object on the imaging plane.
Step S303: determine the position of the intersection as the target position.
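Sub-steps S301 to S303 amount to a line-plane intersection: the imaging point is where the line through the object and the focus pierces the imaging plane. A minimal sketch under a pinhole model, representing the plane by a point on it and its normal vector (the function name and representation are illustrative; the patent prescribes no formulas):

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def imaging_point(obj_pos, focus_pos, plane_point, plane_normal):
    """Steps S302/S303: intersect the line through the object (first position)
    and the camera focus with the imaging plane; all arguments are 3-D tuples.
    Returns the intersection point, or None if the line is parallel to the plane."""
    direction = tuple(f - o for o, f in zip(obj_pos, focus_pos))
    denom = _dot(direction, plane_normal)
    if abs(denom) < 1e-12:
        return None  # no imaging point: line parallel to the imaging plane
    t = _dot(tuple(p - o for o, p in zip(obj_pos, plane_point)), plane_normal) / denom
    return tuple(o + t * d for o, d in zip(obj_pos, direction))
```

With the focus at the origin and the imaging plane at z = -1 (focal length 1), an object 10 m ahead and 2 m to the right images at x = -0.2, showing the familiar pinhole inversion.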
In some embodiments, the user may designate a camera as the target camera. When the user cannot tell whether the designated camera can capture the objects to be annotated, the data processing device 10 may determine whether any of the one or more objects to be annotated lies outside the target shooting area of the target camera. If so, the objects outside the target shooting area are not annotated; besides not annotating them, the device may also show the user a prompt that those objects cannot be captured and/or cannot be annotated. For the objects inside the target shooting area, the device proceeds to determine the target position of the imaging point of each object on the imaging plane of the target camera.
In other embodiments, the device may determine whether the target shooting area of each preset camera includes the first position of the object to be annotated, and determine every preset camera whose target shooting area includes the first position as a target camera. In this way, for a single object to be annotated, its annotation information can be placed in the monitoring image of every preset camera that can capture it.
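The selection of every preset camera that can capture the object can be sketched as a per-camera coverage test. The patent does not define how the target shooting area is tested, so the sketch below assumes a flat-ground model with only a horizontal field of view and a maximum range; every name is illustrative.

```python
import math

def can_see(cam_pos, cam_yaw_deg, h_fov_deg, max_range_m, obj_pos):
    """Rough stand-in for the 'target shooting area contains the first
    position' test: object inside the horizontal FOV and within range
    (vertical FOV and occlusion are ignored in this sketch)."""
    dx, dy = obj_pos[0] - cam_pos[0], obj_pos[1] - cam_pos[1]
    if math.hypot(dx, dy) > max_range_m:
        return False
    bearing = math.degrees(math.atan2(dx, dy))  # 0 deg = north, clockwise
    diff = (bearing - cam_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= h_fov_deg / 2.0

def cameras_covering(obj_pos, cameras):
    """Return every preset camera whose shooting area contains the object,
    so the same annotation can be placed in each of their images."""
    return [cam_id for cam_id, (pos, yaw, fov, rng) in cameras.items()
            if can_see(pos, yaw, fov, rng, obj_pos)]
```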
Further, as shown in Fig. 2, the method may also include step S204.
Step S204: place the annotation information of the object to be annotated at the target position.
It is worth noting that after an object has been annotated in the monitoring image under one set of shooting parameters, if the camera's shooting parameters change, the position of the object in the monitoring image under the new parameters can be computed by a related algorithm (e.g., a digital pan-tilt algorithm), and the annotation information of the object is moved to that position accordingly. The monitoring image annotation method of the embodiments of the present application therefore only needs to annotate the monitoring image captured with one specific set of shooting parameters; when the camera switches shooting parameters, the position of the annotation information can be adjusted correspondingly.
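That adjustment can be sketched as re-projecting the object's camera-frame position once the rotation parameters change. The patent only names a "digital pan-tilt algorithm" without detail, so the sketch below assumes a pure pan about the vertical axis followed by a pinhole projection; the function name and axis conventions are illustrative.

```python
import math

def project_after_pan(obj_cam_xyz, pan_deg, focal_len=1.0):
    """Rotate an object's camera-frame position (x right, y up, z forward)
    by the pan angle, then re-apply the pinhole projection so the annotation
    follows the object.  Returns image coordinates (u, v), or None if the
    object has left the field of view (moved behind the camera)."""
    x, y, z = obj_cam_xyz
    a = math.radians(pan_deg)
    xr = x * math.cos(a) - z * math.sin(a)
    zr = x * math.sin(a) + z * math.cos(a)
    if zr <= 0:
        return None
    return (focal_len * xr / zr, focal_len * y / zr)
```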
Referring to Fig. 4, Fig. 4 is a functional block diagram of a monitoring image annotation device 400 provided by an embodiment of the present application. The monitoring image annotation device 400 includes functional modules that can be stored in software form in the machine-readable storage medium 11 of the data processing device 10. Divided by function, the monitoring image annotation device 400 may include a first-position obtaining module 410, a second-position obtaining module 420, and a target-position determining module 430.
The first-position obtaining module 410 is configured to obtain the first position of one or more objects to be annotated from the geographic information system, the annotation information of the objects being recorded in the geographic information system.
In this embodiment, for the description of the first-position obtaining module 410, reference may be made to the detailed description of step S201 shown in Fig. 2; that is, step S201 may be performed by the first-position obtaining module 410.
The second-position obtaining module 420 is configured to determine one or more target cameras, and to obtain the target shooting parameters corresponding to the target shooting area of each target camera and the second position of the target camera in the geographic information system.
In this embodiment, for the description of the second-position obtaining module 420, reference may be made to the detailed description of step S202 shown in Fig. 2; that is, step S202 may be performed by the second-position obtaining module 420.
The target-position determining module 430 is configured to determine, according to the first position, the second position, and the target shooting parameters, the target position of the imaging point of the object to be annotated on the imaging plane of the target camera, for placing the annotation information of the object.
In this embodiment, for the description of the target-position determining module 430, reference may be made to the detailed description of step S203 shown in Fig. 2; that is, step S203 may be performed by the target-position determining module 430.
Optionally, the target-position determining module 430 may include an imaging-plane determining submodule 431, an imaging-point determining submodule 432, and a target-position determining submodule 433.
The imaging-plane determining submodule 431 is configured to determine the position of the focus and of the imaging plane of the target camera according to the second position and the target shooting parameters.
In this embodiment, for the description of the imaging-plane determining submodule 431, reference may be made to the detailed description of step S301 shown in Fig. 3; that is, step S301 may be performed by the imaging-plane determining submodule 431.
The imaging-point determining submodule 432 is configured to determine the intersection of the imaging plane with the line connecting the first position and the position of the focus, and to take the intersection as the imaging point.
In this embodiment, for the description of the imaging-point determining submodule 432, reference may be made to the detailed description of step S302 shown in Fig. 3; that is, step S302 may be performed by the imaging-point determining submodule 432.
The target-position determining submodule 433 is configured to determine the position of the intersection as the target position.
In this embodiment, for the description of the target-position determining submodule 433, reference may be made to the detailed description of step S303 shown in Fig. 3; that is, step S303 may be performed by the target-position determining submodule 433.
Optionally, the monitoring image annotation device 400 may also include an annotating module 440.
The annotating module 440 is configured to place the annotation information of the object to be annotated at the target position.
In this embodiment, for the description of the annotating module 440, reference may be made to the detailed description of step S204 shown in Fig. 2; that is, step S204 may be performed by the annotating module 440.
Referring to Fig. 5, Fig. 5 is a schematic flowchart of another monitoring image annotation method provided by an embodiment of the present application. This method can also be applied to the data processing device 10 shown in Fig. 1; each step of the method is described in detail below.
Step S501: determine the second position of the target camera in the geographic information system.
Step S502: determine, according to the second position and the target shooting parameters, the region that the target camera can capture with the target shooting parameters as the target shooting area.
Step S503: determine, from multiple annotation objects, one or more target annotation objects located in the target shooting area, the annotation information of the multiple annotation objects being recorded in the geographic information system.
Step S504: obtain the first position of the one or more target annotation objects from the geographic information system.
Step S505: determine, according to the first position, the second position, and the target shooting parameters, the target position of the imaging point of the target annotation object on the imaging plane of the target camera, for placing the annotation information of the target annotation object.
Optionally, in step S505, the specific way of determining the target position of the imaging point of the target annotation object on the imaging plane of the target camera according to the first position, the second position, and the target shooting parameters may refer to the sub-steps shown in Fig. 3, which are not repeated here.
As described above, this monitoring image annotation method determines one or more target annotation objects within the target shooting area of the target camera, which ensures that each target annotation object has an imaging point on the imaging plane of the target camera. In this way, multiple target annotation objects can be annotated automatically in batch in the monitoring image of a single camera.
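Steps S502 to S505 can be sketched end to end: filter the recorded annotation objects by the target shooting area, then project each remaining object to an image coordinate. The flat-ground field-of-view model and every name below are assumptions for illustration, not the patent's algorithm.

```python
import math

def annotate_batch(cam_pos, cam_yaw_deg, h_fov_deg, labels):
    """Keep only labelled objects inside the camera's horizontal field of
    view (the 'target shooting area'), then map each to a normalised image
    x-coordinate in [-1, 1] (left edge to right edge)."""
    half = math.radians(h_fov_deg) / 2.0
    out = {}
    for name, (east, north) in labels.items():
        dx, dy = east - cam_pos[0], north - cam_pos[1]
        bearing = math.atan2(dx, dy)                    # 0 rad = north
        rel = (bearing - math.radians(cam_yaw_deg) + math.pi) % (2 * math.pi) - math.pi
        if abs(rel) <= half:                            # inside the shooting area
            out[name] = math.tan(rel) / math.tan(half)  # -1 (left) .. +1 (right)
    return out
```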
Referring to Fig. 6, Fig. 6 is a functional block diagram of another monitoring image annotation device provided by an embodiment of the present application. The monitoring image annotation device 600 includes functional modules that can be stored in software form in the machine-readable storage medium 11 of the data processing device 10. Divided by function, the monitoring image annotation device 600 may include a first determining module 610, a second determining module 620, a third determining module 630, an obtaining module 640, and a fourth determining module 650.
The first determining module 610 is configured to determine the second position of the target camera in the geographic information system.
In this embodiment, for the description of the first determining module 610, reference may be made to the detailed description of step S501 shown in Fig. 5; that is, step S501 may be performed by the first determining module 610.
The second determining module 620 is configured to determine, according to the second position and the target shooting parameters, the region that the target camera can capture with the target shooting parameters as the target shooting area.
In this embodiment, for the description of the second determining module 620, reference may be made to the detailed description of step S502 shown in Fig. 5; that is, step S502 may be performed by the second determining module 620.
The third determining module 630 is configured to determine, from multiple annotation objects, one or more target annotation objects located in the target shooting area, the annotation information of the multiple annotation objects being recorded in the geographic information system.
In this embodiment, for the description of the third determining module 630, reference may be made to the detailed description of step S503 shown in Fig. 5; that is, step S503 may be performed by the third determining module 630.
The obtaining module 640 is configured to obtain the first position of the one or more target annotation objects from the geographic information system.
In this embodiment, for the description of the obtaining module 640, reference may be made to the detailed description of step S504 shown in Fig. 5; that is, step S504 may be performed by the obtaining module 640.
The fourth determining module 650 is configured to determine, according to the first position, the second position, and the target shooting parameters, the target position of the imaging point of the target annotation object on the imaging plane of the target camera, for placing the annotation information of the target annotation object.
In this embodiment, for the description of the fourth determining module 650, reference may be made to the detailed description of step S505 shown in Fig. 5; that is, step S505 may be performed by the fourth determining module 650.
In summary, the embodiments of the present application provide a monitoring image annotation method and device. The first position of one or more objects to be annotated is obtained from the geographic information system, together with the target shooting parameters corresponding to the target shooting area of the determined target camera and the second position of that camera in the geographic information system, so as to determine the target position of the imaging point of each object on the imaging plane of the target camera, which is used to place the annotation information of the object. In this way, multiple objects to be annotated can be annotated automatically in batch, and the same object can be annotated in the monitoring images of multiple cameras.
In the embodiments provided in the present application, it should be understood that the disclosed method, device, and system may also be implemented in other ways. The device embodiments described above are merely exemplary. For example, the flowcharts and block diagrams in the drawings show the possible architectures, functions, and operations of the devices, methods, and computer program products of the various embodiments of the present application. Each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of such blocks, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise", and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element qualified by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above are merely specific embodiments of this application, but the scope of protection of this application is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed in this application shall fall within the scope of protection of this application. Therefore, the scope of protection of this application shall be subject to the scope of protection of the claims.
Claims (10)
1. A monitoring image annotation method, characterized in that the method comprises:
obtaining, from a geographic information system, a first position of each of one or more objects to be marked, wherein markup information of the objects to be marked is recorded in the geographic information system;
determining one or more target video cameras, and obtaining target shooting parameters corresponding to a target shooting area of the target video camera and a second position of the target video camera in the geographic information system; and
determining, according to the first position, the second position, and the target shooting parameters, a target position of an imaging point of the object to be marked on an imaging plane of the target video camera, the target position being used for marking the markup information of the object to be marked.
2. The method according to claim 1, characterized in that determining, according to the first position, the second position, and the target shooting parameters, the target position of the imaging point of the object to be marked on the imaging plane of the target video camera comprises:
determining a focus of the target video camera and a position of the imaging plane according to the second position and the target shooting parameters;
determining an intersection point of the imaging plane and a line connecting the first position and the focus, and taking the intersection point as the imaging point; and
determining the position of the intersection point as the target position.
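The geometry of claim 2 can be illustrated with a simple pinhole-camera model. This is a sketch only, not the patent's implementation: the focus is approximated by the camera position (the second position), and the rotation matrix and focal length stand in for the target shooting parameters; all function and parameter names are assumptions.

```python
import numpy as np

def yaw_matrix(yaw):
    """World-to-camera rotation for a camera turned `yaw` radians about the
    vertical (y) axis; at yaw = 0 the camera looks along world +z."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s, 0.0, c]])

def project_point(first_pos, second_pos, R, focal_len):
    """Intersect the line from the object (first_pos) through the camera
    focus (approximated by second_pos) with an imaging plane placed
    focal_len in front of the focus; return the imaging point (u, v),
    or None if the object is not in front of the camera."""
    p = R @ (np.asarray(first_pos, float) - np.asarray(second_pos, float))
    if p[2] <= 0.0:  # object behind the imaging plane
        return None
    return (focal_len * p[0] / p[2], focal_len * p[1] / p[2])
```

For example, with the camera at the origin looking along +z and a focal length of 2, an object at (1, 2, 10) images near (0.2, 0.4), which is the position used as the target position in the claim.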
3. The method according to claim 1 or 2, characterized in that determining the one or more target video cameras comprises:
determining a video camera specified by a user as the target video camera;
the method further comprising:
judging whether any of the one or more objects to be marked lies outside the target shooting area of the target video camera; and
if so, not marking the objects to be marked that lie outside the target shooting area.
4. The method according to claim 1 or 2, characterized in that determining the one or more target video cameras comprises:
determining, as the target video camera, a video camera whose target shooting area contains the first position of the object to be marked.
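The camera selection of claim 4 can be sketched under assumptions the patent does not state: here the target shooting area is modeled as a flat wedge on the ground plane defined by a heading, a horizontal field of view, and a maximum range, and all names and parameters are illustrative.

```python
import math

def in_shooting_area(first_pos, second_pos, heading, h_fov, max_range):
    """Return True if a ground-plane GIS position (first_pos) lies inside
    the wedge-shaped shooting area of a camera at second_pos."""
    dx = first_pos[0] - second_pos[0]
    dy = first_pos[1] - second_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    # Signed bearing offset from the camera heading, wrapped to [-pi, pi).
    off = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(off) <= h_fov / 2.0

def select_target_cameras(first_pos, cameras):
    """Claim-4 style selection: keep the cameras whose shooting area
    contains the object's first position. `cameras` maps a camera name
    to (second_pos, heading, h_fov, max_range)."""
    return [name for name, (pos, hd, fov, rng) in cameras.items()
            if in_shooting_area(first_pos, pos, hd, fov, rng)]
```

An object at (10, 0) is then inside the area of a camera at the origin heading along +x with a 90-degree field of view, but outside that of a camera facing the opposite way.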
5. The method according to claim 1 or 2, characterized in that the method further comprises:
marking the markup information of the object to be marked at the target position.
6. A monitoring image annotation device, characterized in that the device comprises:
a first position obtaining module, configured to obtain, from a geographic information system, a first position of each of one or more objects to be marked, wherein markup information of the objects to be marked is recorded in the geographic information system;
a second position obtaining module, configured to determine one or more target video cameras, and to obtain target shooting parameters corresponding to a target shooting area of the target video camera and a second position of the target video camera in the geographic information system; and
a target position determining module, configured to determine, according to the first position, the second position, and the target shooting parameters, a target position of an imaging point of the object to be marked on an imaging plane of the target video camera, the target position being used for marking the markup information of the object to be marked.
7. The device according to claim 6, characterized in that the target position determining module comprises:
an imaging plane determining submodule, configured to determine a focus of the target video camera and a position of the imaging plane according to the second position and the target shooting parameters;
an imaging point determining submodule, configured to determine an intersection point of the imaging plane and a line connecting the first position and the focus, and to take the intersection point as the imaging point; and
a target position determining submodule, configured to determine the position of the intersection point as the target position.
8. The device according to claim 6 or 7, characterized in that the device further comprises:
a marking module, configured to mark the markup information of the object to be marked at the target position.
9. A monitoring image annotation method, characterized in that the method comprises:
determining a second position of a target video camera in a geographic information system;
determining, according to the second position and target shooting parameters, the region that the target video camera can capture with the target shooting parameters as a target shooting area;
determining, from a plurality of marked objects, one or more target marked objects located in the target shooting area, wherein markup information of the plurality of marked objects is recorded in the geographic information system;
obtaining a first position of each of the one or more target marked objects from the geographic information system; and
determining, according to the first position, the second position, and the target shooting parameters, a target position of an imaging point of the target marked object on an imaging plane of the target video camera, the target position being used for marking the markup information of the target marked object.
10. A monitoring image annotation device, characterized in that the device comprises:
a first determining module, configured to determine a second position of a target video camera in a geographic information system;
a second determining module, configured to determine, according to the second position and target shooting parameters, the region that the target video camera can capture with the target shooting parameters as a target shooting area;
a third determining module, configured to determine, from a plurality of marked objects, one or more target marked objects located in the target shooting area, wherein markup information of the plurality of marked objects is recorded in the geographic information system;
an obtaining module, configured to obtain a first position of each of the one or more target marked objects from the geographic information system; and
a fourth determining module, configured to determine, according to the first position, the second position, and the target shooting parameters, a target position of an imaging point of the target marked object on an imaging plane of the target video camera, the target position being used for marking the markup information of the target marked object.
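The flow of claims 9 and 10 can be sketched end to end under simplifying assumptions (a flat ground plane, a wedge-shaped shooting area, and a normalized horizontal image coordinate; none of the names or formulas below are taken from the patent): derive the shooting area from the second position and shooting parameters, keep the marked objects inside it, and map each one's bearing offset to a target position on the imaging plane.

```python
import math

def annotate_in_view(objects, second_pos, heading, h_fov, max_range):
    """Map each in-view marked object to a normalized horizontal image
    coordinate in [-1, 1] (its target position on the imaging plane).
    `objects` maps markup information (e.g. a place name) to a
    ground-plane GIS first position."""
    targets = {}
    for label, (x, y) in objects.items():
        dx, dy = x - second_pos[0], y - second_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > max_range:
            continue  # outside the target shooting area
        off = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi
        if abs(off) > h_fov / 2.0:
            continue  # outside the horizontal field of view
        # Flat-image approximation: the imaging point's horizontal offset
        # grows with the tangent of the bearing offset.
        targets[label] = math.tan(off) / math.tan(h_fov / 2.0)
    return targets
```

With a camera at the origin heading along +x and a 90-degree horizontal field of view, an object straight ahead lands at coordinate 0.0, while objects behind, beside, or beyond the range of the camera are filtered out before any markup is drawn.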
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910133988.1A CN109886201A (en) | 2019-02-22 | 2019-02-22 | Monitoring image mask method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109886201A true CN109886201A (en) | 2019-06-14 |
Family
ID=66928875
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910133988.1A Pending CN109886201A (en) | 2019-02-22 | 2019-02-22 | Monitoring image mask method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109886201A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101924927A (en) * | 2010-08-10 | 2010-12-22 | 中兴通讯股份有限公司 | Joint video monitoring method and system thereof |
CN102878982A (en) * | 2011-07-11 | 2013-01-16 | 北京新岸线移动多媒体技术有限公司 | Method for acquiring three-dimensional scene information and system thereof |
CN104284155A (en) * | 2014-10-16 | 2015-01-14 | 浙江宇视科技有限公司 | Video image information labeling method and device |
CN105160327A (en) * | 2015-09-16 | 2015-12-16 | 小米科技有限责任公司 | Building identification method and device |
CN107317999A (en) * | 2017-05-24 | 2017-11-03 | 天津市亚安科技有限公司 | Method and system for realizing automatic identification of geographic name on turntable |
CN108810462A (en) * | 2018-05-29 | 2018-11-13 | 高新兴科技集团股份有限公司 | A kind of camera video interlock method and system based on location information |
US20180350093A1 (en) * | 2017-05-30 | 2018-12-06 | Hand Held Products, Inc. | Systems and methods for determining a location of a user when using an imaging device in an indoor facility |
CN109284404A (en) * | 2018-09-07 | 2019-01-29 | 成都川江信息技术有限公司 | A method of the scene coordinate in real-time video is matched with geography information |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190614 |