CN110322397B - Image shielding method, server, monitoring system and storage medium - Google Patents
- Publication number: CN110322397B (application number CN201910555598.3A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- dimensional coordinate
- dimensional
- target
- coordinates
- Prior art date
- Legal status: Active
Classifications
- G06T3/04
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses an image shielding method, a server, a monitoring system and a storage medium. The image shielding method comprises the following steps: acquiring a target image sent by a camera terminal, wherein the target image comprises at least one feature region; calculating the three-dimensional coordinates of the feature region in space; converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal; and sending the two-dimensional coordinates to the target camera terminal so that the target camera terminal performs shielding processing based on the two-dimensional coordinates. In this way, an ordinary camera terminal without recognition and positioning functions can shield the feature region, protecting the privacy of passers-by.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image shielding method, a server, a monitoring system, and a storage medium.
Background
At present, with the growing emphasis on security, cameras used for monitoring are increasingly widespread; images of the public are widely collected in daily life, and personal privacy is excessively exposed. To protect personal privacy, some key parts, such as the face, need to be shielded when a camera collects video, and how to shield surveillance video has become a problem to be solved urgently.
Disclosure of Invention
In order to solve the problems, the application provides an image shielding method, a server, a monitoring system and a storage medium, which can enable a common camera terminal without an identification and positioning function to shield a characteristic area and protect the privacy of passers-by.
One technical scheme adopted by the application is to provide an image shielding method applied to a server, comprising the following steps: acquiring a target image sent by a camera terminal, wherein the target image comprises at least one feature region; calculating the three-dimensional coordinates of the feature region in space; converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal; and sending the two-dimensional coordinates to the target camera terminal so that the target camera terminal performs shielding processing based on the two-dimensional coordinates.
The camera terminal comprises a first camera and a second camera; the method further comprises the steps of: establishing a first coordinate system based on a first camera plane and a three-dimensional coordinate system based on space; a step of calculating three-dimensional coordinates of a feature region based on space, comprising: and calculating the coordinates of the feature points in the three-dimensional coordinate system according to the coordinates of the feature points in the feature region in the first coordinate system and the parameters of the camera terminal.
The step of calculating the coordinates of the feature points in the three-dimensional coordinate system according to the coordinates of the feature points of the feature region in the first coordinate system and the parameters of the camera terminal comprises: calculating the coordinates of the feature point in the three-dimensional coordinate system using the following formulas:

x = b · x_l / d, y = b · y_l / d, z = b · f / d

wherein (x_l, y_l) are the coordinates of the feature point in the first coordinate system, b is the optical center distance between the first camera and the second camera, f is the focal length of the first camera and the second camera, and d is the parallax of the feature point between the first camera and the second camera.
Wherein the method further comprises: establishing a coordinate conversion relation of a three-dimensional coordinate system based on space and a two-dimensional coordinate system based on a target camera terminal; the step of converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal includes: and converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal according to the coordinate conversion relation.
The step of establishing the coordinate conversion relation between the three-dimensional coordinate system based on the space and the two-dimensional coordinate system based on the target camera terminal comprises the following steps: acquiring coordinates of three vertexes of a target triangle in a three-dimensional coordinate system; determining a projection triangle of the target triangle in a two-dimensional coordinate system of the target camera terminal; acquiring coordinates of three vertexes of a projection triangle in a two-dimensional coordinate system; and calculating the conversion relation between the three-dimensional coordinate system and the two-dimensional coordinate system based on the three-dimensional coordinate value of the target triangle and the two-dimensional coordinate value of the projection triangle.
The step of calculating the conversion relation between the three-dimensional coordinate system and the two-dimensional coordinate system based on the three-dimensional coordinate values of the target triangle and the two-dimensional coordinate values of the projection triangle comprises: calculating the conversion relation between the direction vectors of the three-dimensional coordinate system and the direction vectors of the two-dimensional coordinate system using the following formulas:

i' = i · cos α1 + j · cos β1 + k · cos γ1
j' = i · cos α2 + j · cos β2 + k · cos γ2
k' = i · cos α3 + j · cos β3 + k · cos γ3

wherein (i, j, k) are the direction vectors of the three-dimensional coordinate system, (i', j', k') are the direction vectors of the two-dimensional coordinate system, and α1, α2, α3, β1, β2, β3, γ1, γ2 and γ3 are the included angles between each coordinate axis of the three-dimensional coordinate system and each coordinate axis of the two-dimensional coordinate system.
Another technical scheme adopted by the application is to provide a server comprising a processor and a memory, the memory being configured to store program data and the processor being configured to execute the program data to implement the method described above.
Another technical scheme adopted by the application is to provide a monitoring system comprising a server, and a camera terminal and a target camera terminal connected with the server. The camera terminal is used for acquiring a target image, determining a feature region in the target image, and sending the target image to the server. The server is used for acquiring the target image, calculating the three-dimensional coordinates of the feature region in space, converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal, and transmitting the two-dimensional coordinates to the target camera terminal. The target camera terminal is used for acquiring the two-dimensional coordinates and performing shielding processing based on the two-dimensional coordinates.
The camera terminal is used for acquiring a target image, determining a face area in the target image by adopting face recognition, and determining a characteristic area covering the face area based on the face area.
Another technical scheme adopted by the application is to provide a computer storage medium storing program data which, when executed by a processor, implements the method described above.
The image shielding method provided by the application comprises the following steps: acquiring a target image sent by a camera terminal, wherein the target image comprises at least one feature region; calculating the three-dimensional coordinates of the feature region in space; converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal; and sending the two-dimensional coordinates to the target camera terminal so that the target camera terminal performs shielding processing based on the two-dimensional coordinates. In this way, an ordinary camera terminal without recognition and positioning functions can shield the feature region; in the monitoring field, privacy areas such as faces can be shielded while security monitoring is still performed, and the privacy of passers-by is protected.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
fig. 1 is a flow chart of an occlusion method of an image provided in an embodiment of the present application;
fig. 2 is a schematic diagram of measurement of a camera terminal according to an embodiment of the present application;
fig. 3 is a schematic flow chart of establishing a coordinate transformation relationship according to an embodiment of the present application;
FIG. 4 is a schematic view of a target triangle projection provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a monitoring system according to an embodiment of the present application;
FIG. 6 is an interactive schematic diagram of a monitoring system provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer storage medium according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not limiting. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first," "second," and the like in this application are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 is a flowchart of an image occlusion method according to an embodiment of the present application, where the method includes:
step 11: acquiring a target image sent by a camera terminal; wherein the target image comprises at least one feature region.
The camera terminal adopts the principle of binocular positioning, namely, two cameras are used for positioning. For a characteristic point on an object, two cameras fixed at different positions are used for shooting images of the object, and coordinates of the point on image planes of the two cameras are respectively obtained. As long as the exact relative position of the two cameras is known, the coordinates of the feature point in the coordinate system of the fixed camera can be obtained by using a geometric method, i.e. the position of the feature point is determined.
Alternatively, the feature region may be a region containing a face. For example, the camera terminal acquires a target image, determines the face region in it by face recognition, and determines a feature region covering the face region based on the face region.
Alternatively, the feature region is a rectangular region, or a specific region shaped to match the outline of the face.
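As a rough Python sketch (hypothetical; the patent does not define how the covering rectangle is derived, nor any function names), a rectangular feature region covering a detected face box could be produced by padding the box with a margin:

```python
def covering_rect(face, margin=0.2):
    """Expand a detected face box (x, y, w, h) into a rectangular
    feature region that fully covers the face, padded by `margin`
    (a fraction of the box size). Illustrative only."""
    x, y, w, h = face
    dx, dy = int(w * margin), int(h * margin)
    return (x - dx, y - dy, w + 2 * dx, h + 2 * dy)
```

For example, a 50x50 face box at (100, 100) with the default margin yields the rectangle (90, 90, 70, 70).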
Step 12: the feature region is calculated based on three-dimensional coordinates of the space.
Alternatively, the camera terminal may be a binocular camera terminal, which specifically includes a first camera and a second camera, and a first coordinate system based on a first camera plane and a three-dimensional coordinate system based on a space may be pre-established.
Optionally, as shown in fig. 2, fig. 2 is a schematic view of measurement of a camera terminal provided in an embodiment of the present application.
First, the coordinate systems are established: the focal lengths of the first camera and the second camera are both f, their optical centers are o_l and o_r respectively, and the optical center distance is b. x_a o_a y_a is the image coordinate system of the first camera, x_b o_b y_b is the image coordinate system of the second camera, and xyz is the three-dimensional space coordinate system. P is a point in three-dimensional space with coordinates (x, y, z) in the three-dimensional coordinate system. P' is the projection of P on the XOY plane; P_l is the intersection with XOY of the line connecting P and o_l, and P_r is the intersection with XOY of the line connecting P and o_r. P_l' is the projection of P_l on XOZ, and P_r' is the projection of P_r on XOZ.
Knowing the optical center distance b, the focal length f, the coordinates (x_l, y_l) of P_l' in x_a o_a y_a, and the coordinates (x_r, y_r) of P_r' in x_b o_b y_b, the coordinates (x, y, z) of P can be found. By similar triangles it follows that:

d = x_l − x_r
x = b · x_l / d
y = b · y_l / d
z = b · f / d

Therefore, the coordinate value of any point in the three-dimensional space coordinate system can be obtained using these formulas.
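A minimal Python sketch of this binocular triangulation, using the standard stereo relations d = x_l − x_r, x = b·x_l/d, y = b·y_l/d, z = b·f/d (the function name and argument layout are illustrative):

```python
def triangulate(xl, yl, xr, b, f):
    """Recover the 3D coordinates (x, y, z) of a feature point from
    its first-camera image coordinates (xl, yl), its second-camera
    x-coordinate xr, the optical-center distance b, and the shared
    focal length f."""
    d = xl - xr  # parallax (disparity) between the two cameras
    if d == 0:
        raise ValueError("zero parallax: point is at infinity")
    return (b * xl / d, b * yl / d, b * f / d)
```

The same routine is applied to every feature point of interest, such as the four corners of a face rectangle.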
Step 13: and converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal.
It can be understood that the target camera terminal in this embodiment is a common camera terminal, and does not have a face recognition function.
Alternatively, a coordinate conversion relationship between a three-dimensional coordinate system based on a space and a two-dimensional coordinate system based on the target image capturing terminal may be established in advance, and step 13 may convert the three-dimensional coordinate into the two-dimensional coordinate based on the target image capturing terminal based on the coordinate conversion relationship.
Optionally, as shown in fig. 3, fig. 3 is a schematic flow chart for establishing a coordinate transformation relationship according to an embodiment of the present application, where the method includes:
step 31: coordinates of three vertexes of the target triangle in a three-dimensional coordinate system are obtained.
Specifically, the coordinates of each vertex in the three-dimensional coordinate system may be calculated in the above manner. Assume the coordinate values of the three vertices A, B and C are respectively A(Xa, Ya, Za), B(Xb, Yb, Zb) and C(Xc, Yc, Zc).
Step 32: and determining a projection triangle of the target triangle in a two-dimensional coordinate system of the target camera terminal.
Step 33: coordinates of three vertexes of the projection triangle in a two-dimensional coordinate system are obtained.
As shown in fig. 4, fig. 4 is a schematic view of the target triangle projection provided in an embodiment of the present application.
In the target camera terminal, the position of the triangle in the image is calibrated: in the image coordinate system x'o'y' of the target camera terminal, the coordinate values of the three points a, b and c are measured. Since the x'o'y' coordinate system has no z axis, the coordinates a(Xa', Ya', Za'), b(Xb', Yb', Zb') and c(Xc', Yc', Zc') simplify to a(Xa', Ya', 0), b(Xb', Yb', 0) and c(Xc', Yc', 0).
Step 34: and calculating the conversion relation between the three-dimensional coordinate system and the two-dimensional coordinate system based on the three-dimensional coordinate value of the target triangle and the two-dimensional coordinate value of the projection triangle.
Let the direction vector basis of the xyz three-dimensional coordinate system be (i, j, k) and that of the x'o'y' coordinate system be (i', j', k'). The relation between them is:

i' = i · cos α1 + j · cos β1 + k · cos γ1
j' = i · cos α2 + j · cos β2 + k · cos γ2
k' = i · cos α3 + j · cos β3 + k · cos γ3

wherein (i, j, k) is the direction vector of the three-dimensional coordinate system, (i', j', k') is the direction vector of the two-dimensional coordinate system, and α1, α2, α3, β1, β2, β3, γ1, γ2 and γ3 are the included angles between each coordinate axis of the three-dimensional coordinate system and each coordinate axis of the two-dimensional coordinate system.
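A minimal sketch of applying such a direction-cosine conversion in Python, assuming the nine included angles are known from calibration and ignoring any translation between the two origins (which the surrounding text does not discuss); all names are illustrative:

```python
import math

def direction_cosine_matrix(angles):
    """Build the 3x3 direction-cosine matrix from the nine included
    angles (a1, a2, a3, b1, b2, b3, g1, g2, g3) between the axes of
    the xyz system and the axes of the x'y'(z') system."""
    a1, a2, a3, b1, b2, b3, g1, g2, g3 = angles
    return [
        [math.cos(a1), math.cos(b1), math.cos(g1)],
        [math.cos(a2), math.cos(b2), math.cos(g2)],
        [math.cos(a3), math.cos(b3), math.cos(g3)],
    ]

def to_target_2d(point, angles):
    """Rotate a 3D point into the target camera's frame and keep
    only (x', y'), the image-plane coordinates."""
    m = direction_cosine_matrix(angles)
    x, y, z = point
    out = [m[r][0] * x + m[r][1] * y + m[r][2] * z for r in range(3)]
    return out[0], out[1]
```

With all corresponding axes aligned (angle 0 between matching axes, 90 degrees otherwise) the conversion reduces to the identity, which makes a convenient sanity check.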
Step 14: and sending the two-dimensional coordinates to the target camera terminal so that the target camera terminal performs shielding processing based on the two-dimensional coordinates.
Unlike the prior art, the image occlusion method provided in this embodiment includes: acquiring a target image sent by a camera terminal, wherein the target image comprises at least one feature region; calculating the three-dimensional coordinates of the feature region in space; converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal; and sending the two-dimensional coordinates to the target camera terminal so that the target camera terminal performs shielding processing based on the two-dimensional coordinates. In this way, an ordinary camera terminal without recognition and positioning functions can shield the feature region; in the monitoring field, privacy areas such as faces can be shielded while security monitoring is still performed, and the privacy of passers-by is protected.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a monitoring system provided in an embodiment of the present application, where the monitoring system 50 includes a server 51, and an image capturing terminal 52 and a target image capturing terminal 53 connected to the server 51.
The camera terminal 52 includes two cameras and is capable of acquiring the three-dimensional coordinates of feature points, while the target camera terminal 53 is an ordinary camera.
This embodiment is described with reference to fig. 6, which is an interaction schematic diagram of the monitoring system provided in an embodiment of the present application.
S1: the camera terminal recognizes a privacy part, such as a face, and forms a rectangular frame around the face.
S2: the camera terminal transmits the data back to the server.
S3: the server calculates coordinate values of the face rectangle in a three-dimensional space.
S4: and the server calculates coordinate values of the face rectangle in an image coordinate system of the target camera terminal.
S5: the server transmits the coordinates of privacy occlusion in the target camera terminal to the target camera terminal.
S6: after acquiring the privacy rectangle coordinates, the target camera terminal shields the corresponding rectangle of the image.
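The server-side portion of this exchange (steps S3 to S5) might be organized as in the Python sketch below; it is purely illustrative, the 3D-to-2D conversion is passed in as a function, and every name is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class OcclusionRequest:
    """One privacy rectangle, expressed in the target camera's
    2D image frame (the payload of step S5)."""
    camera_id: str
    rect: tuple  # (x, y, width, height)

def handle_frame(detections, to_target_2d):
    """Steps S3-S4: for each detected face rectangle, given here as
    the 3D coordinates of its corners (the output of step S3), map
    every corner through the 3D-to-2D conversion and take the
    axis-aligned bounding box as the occlusion rectangle."""
    requests = []
    for camera_id, corners_3d in detections:
        pts = [to_target_2d(p) for p in corners_3d]
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        rect = (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
        requests.append(OcclusionRequest(camera_id, rect))
    return requests  # step S5: send these back to the target terminal
```

Taking the bounding box of the projected corners keeps the occlusion rectangle axis-aligned in the target image even when the projection rotates the original rectangle.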
Referring to fig. 7, fig. 7 is a schematic structural diagram of a server provided in an embodiment of the present application, where the server 51 includes a processor 511 and a memory 512, the memory 512 is used for storing program data, and the processor 511 is used for executing the program data to implement the following method:
acquiring a target image sent by a camera terminal; wherein the target image comprises at least one feature region; calculating three-dimensional coordinates of the feature area based on space; converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal; and sending the two-dimensional coordinates to the target camera terminal so that the target camera terminal performs shielding processing based on the two-dimensional coordinates.
Optionally, the processor 511 is further configured to execute program data to implement the following method: establishing a first coordinate system based on a first camera plane and a three-dimensional coordinate system based on space; a step of calculating three-dimensional coordinates of a feature region based on space, comprising: and calculating the coordinates of the feature points in the three-dimensional coordinate system according to the coordinates of the feature points in the feature region in the first coordinate system and the parameters of the camera terminal.
Optionally, the processor 511 is further configured to execute program data to implement the following method: calculating the coordinates of the feature point in the three-dimensional coordinate system using the following formulas:

x = b · x_l / d, y = b · y_l / d, z = b · f / d

wherein (x_l, y_l) are the coordinates of the feature point in the first coordinate system, b is the optical center distance between the first camera and the second camera, f is the focal length of the first camera and the second camera, and d is the parallax of the feature point between the first camera and the second camera.
Optionally, the processor 511 is further configured to execute program data to implement the following method: establishing a coordinate conversion relation of a three-dimensional coordinate system based on space and a two-dimensional coordinate system based on a target camera terminal; the step of converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal includes: and converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal according to the coordinate conversion relation.
Optionally, the processor 511 is further configured to execute program data to implement the following method: acquiring coordinates of three vertexes of a target triangle in a three-dimensional coordinate system; determining a projection triangle of the target triangle in a two-dimensional coordinate system of the target camera terminal; acquiring coordinates of three vertexes of a projection triangle in a two-dimensional coordinate system; and calculating the conversion relation between the three-dimensional coordinate system and the two-dimensional coordinate system based on the three-dimensional coordinate value of the target triangle and the two-dimensional coordinate value of the projection triangle.
Optionally, the processor 511 is further configured to execute program data to implement the following method: calculating the conversion relation between the direction vector of the three-dimensional coordinate system and the direction vector of the two-dimensional coordinate system using the following formulas:

i' = i · cos α1 + j · cos β1 + k · cos γ1
j' = i · cos α2 + j · cos β2 + k · cos γ2
k' = i · cos α3 + j · cos β3 + k · cos γ3

wherein (i, j, k) is the direction vector of the three-dimensional coordinate system, (i', j', k') is the direction vector of the two-dimensional coordinate system, and α1, α2, α3, β1, β2, β3, γ1, γ2 and γ3 are the included angles between each coordinate axis of the three-dimensional coordinate system and each coordinate axis of the two-dimensional coordinate system.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a computer storage medium provided in an embodiment of the present application, where program data 81 is stored in the computer storage medium 80, and when the program data 81 is executed by a processor, the following method is implemented:
acquiring a target image sent by a camera terminal; wherein the target image comprises at least one feature region; calculating three-dimensional coordinates of the feature area based on space; converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal; and sending the two-dimensional coordinates to the target camera terminal so that the target camera terminal performs shielding processing based on the two-dimensional coordinates.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatuses may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above may, if implemented in the form of software functional units and sold or used as stand-alone products, be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the patent application, and all equivalent structures or equivalent processes according to the specification and drawings of the present application, or direct or indirect application in other related technical fields, are included in the scope of the patent protection of the present application.
Claims (9)
1. An image shielding method applied to a server is characterized by comprising the following steps:
acquiring a target image sent by a camera terminal; the target image comprises at least one characteristic area, and the camera terminal comprises a first camera and a second camera;
establishing a first coordinate system based on the first camera plane and a three-dimensional coordinate system based on space;
calculating coordinates of the feature points in the three-dimensional coordinate system according to the coordinates of the feature points in the feature region in the first coordinate system and parameters of the camera terminal;
converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal;
and sending the two-dimensional coordinates to the target camera terminal so that the target camera terminal performs shielding processing based on the two-dimensional coordinates.
2. The method according to claim 1, wherein
the step of calculating coordinates of the feature points in the three-dimensional coordinate system according to the coordinates of the feature points in the feature region in the first coordinate system and parameters of the camera terminal, includes:
the coordinates of the feature point in the three-dimensional coordinate system are calculated using the following formulas:

x = b · x_l / d, y = b · y_l / d, z = b · f / d

wherein (x_l, y_l) are the coordinates of the feature point in the first coordinate system, b is the optical center distance between the first camera and the second camera, f is the focal length of the first camera and the second camera, and d is the parallax of the feature point between the first camera and the second camera.
3. The method of claim 1, wherein the method further comprises:
establishing a coordinate conversion relation of a three-dimensional coordinate system based on space and a two-dimensional coordinate system based on a target camera terminal;
the step of converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal comprises the following steps:
and converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal according to the coordinate conversion relation.
4. The method of claim 3, wherein
the step of establishing the coordinate conversion relation of the three-dimensional coordinate system based on the space and the two-dimensional coordinate system based on the target camera terminal comprises the following steps:
acquiring coordinates of three vertexes of a target triangle in the three-dimensional coordinate system;
determining a projection triangle of the target triangle in a two-dimensional coordinate system of the target camera terminal;
acquiring coordinates of three vertexes of the projection triangle in the two-dimensional coordinate system;
and calculating the conversion relation between the three-dimensional coordinate system and the two-dimensional coordinate system based on the three-dimensional coordinates of the target triangle and the two-dimensional coordinates of the projection triangle.
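One way to realize claim 4's calibration step can be sketched as follows. This sketch assumes the conversion is a purely linear 2×3 map (no translation term), so the three vertex correspondences give exactly six equations for its six unknowns; the patent's precise form of the relation is not reproduced here, and all names and sample coordinates are hypothetical:

```python
# Recover a 3-D -> 2-D conversion from the three vertex pairs of the target
# triangle and its projection, assuming a linear 2x3 map L with L @ P_i = p_i.

def solve3(A, b):
    """Solve a 3x3 linear system A . x = b by Gauss-Jordan elimination."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[r][3] / M[r][r] for r in range(3)]

def fit_conversion(tri3d, tri2d):
    """Return the 2x3 matrix L such that L . P_i = p_i for each vertex pair."""
    return [solve3([list(P) for P in tri3d], [p[r] for p in tri2d])
            for r in range(2)]

def convert(L, P):
    """Apply the fitted conversion to a 3-D point."""
    return tuple(sum(L[r][c] * P[c] for c in range(3)) for r in range(2))

# Vertices of a target triangle and their (made-up) projections:
tri3d = [(1, 0, 0), (0, 1, 0), (0, 0, 2)]
tri2d = [(1, 0), (0, 1), (1, 0.5)]
L = fit_conversion(tri3d, tri2d)
print(convert(L, (2, 2, 2)))  # (3.0, 2.5)
```

Once fitted, the same L converts every feature point of the shielded region, which is what the method of claim 3 feeds to the target camera terminal.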
5. The method of claim 4, wherein
the step of calculating the conversion relation between the three-dimensional coordinate system and the two-dimensional coordinate system based on the three-dimensional coordinates of the target triangle and the two-dimensional coordinates of the projection triangle comprises:
calculating the conversion relation between the direction vectors of the three-dimensional coordinate system and the direction vectors of the two-dimensional coordinate system using the following formulas:
i' = i·cos α1 + j·cos β1 + k·cos γ1
j' = i·cos α2 + j·cos β2 + k·cos γ2
k' = i·cos α3 + j·cos β3 + k·cos γ3
wherein (i, j, k) are the direction vectors of the three-dimensional coordinate system, (i', j', k') are the direction vectors of the two-dimensional coordinate system, and α1, α2, α3, β1, β2, β3, γ1, γ2 and γ3 are the included angles between the coordinate axes of the three-dimensional coordinate system and those of the two-dimensional coordinate system.
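The direction-cosine conversion of claim 5 can be sketched as a small matrix-vector product; the angle values in the example are assumptions chosen to make the identity case visible, and the exact axis-to-angle pairing is an assumed convention:

```python
# Build the 3x3 direction-cosine matrix from the nine inter-axis angles of
# claim 5 and apply it to a vector. Angle triples are assumed to group as
# alpha = (a1, a2, a3), beta = (b1, b2, b3), gamma = (g1, g2, g3).
import math

def direction_cosine_matrix(alpha, beta, gamma):
    return [
        [math.cos(alpha[0]), math.cos(beta[0]), math.cos(gamma[0])],
        [math.cos(alpha[1]), math.cos(beta[1]), math.cos(gamma[1])],
        [math.cos(alpha[2]), math.cos(beta[2]), math.cos(gamma[2])],
    ]

def convert(vec, R):
    """Apply (i', j', k')^T = R . (i, j, k)^T."""
    return [sum(R[r][c] * vec[c] for c in range(3)) for r in range(3)]

# Identity case: each axis coincides with its counterpart (its own angle is 0,
# the other two are 90 degrees), so a vector passes through unchanged.
R = direction_cosine_matrix((0, math.pi / 2, math.pi / 2),
                            (math.pi / 2, 0, math.pi / 2),
                            (math.pi / 2, math.pi / 2, 0))
print(convert([1.0, 2.0, 3.0], R))  # ≈ [1.0, 2.0, 3.0]
```

For non-degenerate rotations the nine cosines are not independent; they form an orthonormal rotation matrix, which is why three point correspondences in claim 4 suffice to determine them.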
6. A server, comprising a processor and a memory, wherein the memory is used for storing program data and the processor is used for executing the program data to implement the method of any one of claims 1-5.
7. A monitoring system, characterized by comprising a server, and a camera terminal and a target camera terminal which are connected to the server;
the camera terminal is used for acquiring a target image, determining a characteristic area in the target image and sending the target image to the server; the camera terminal comprises a first camera and a second camera;
the server is used for acquiring the target image, establishing a first coordinate system based on the plane of the first camera and a space-based three-dimensional coordinate system, calculating coordinates of feature points of the feature region in the three-dimensional coordinate system according to the coordinates of the feature points in the first coordinate system and parameters of the camera terminal, converting the three-dimensional coordinates into two-dimensional coordinates based on the target camera terminal, and sending the two-dimensional coordinates to the target camera terminal;
the target camera terminal is used for acquiring the two-dimensional coordinates and carrying out shielding processing based on the two-dimensional coordinates.
8. The monitoring system of claim 7, wherein,
the camera terminal is used for acquiring a target image, determining a face region in the target image by face recognition, and determining, based on the face region, a feature region covering the face region.
9. A computer storage medium storing program data which, when executed by a processor, implements the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910555598.3A CN110322397B (en) | 2019-06-25 | 2019-06-25 | Image shielding method, server, monitoring system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110322397A CN110322397A (en) | 2019-10-11 |
CN110322397B true CN110322397B (en) | 2023-05-12 |
Family
ID=68120251
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910555598.3A Active CN110322397B (en) | 2019-06-25 | 2019-06-25 | Image shielding method, server, monitoring system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110322397B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111159751A (en) * | 2019-12-03 | 2020-05-15 | Shenzhen BrainNow Medical Technology Co., Ltd. | Privacy-removing processing method and device for three-dimensional image and terminal equipment |
CN111582240B (en) * | 2020-05-29 | 2023-08-08 | Shanghai Yitu Network Technology Co., Ltd. | Method, device, equipment and medium for identifying number of objects |
CN112614228B (en) * | 2020-12-17 | 2023-09-05 | Beijing Dajia Internet Information Technology Co., Ltd. | Method, device, electronic equipment and storage medium for simplifying three-dimensional grid |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007129336A (en) * | 2005-11-01 | 2007-05-24 | Mitsubishi Electric Corp | Monitoring system and monitoring device |
CN103270752A (en) * | 2010-10-21 | 2013-08-28 | Sensormatic Electronics, LLC | Method and system for converting privacy zone planar images to their corresponding pan/tilt coordinates |
CN105898208A (en) * | 2014-05-07 | 2016-08-24 | Hanwha Techwin Co., Ltd. | A surveillance system, a surveillance camera and an image processing method |
CN107094234A (en) * | 2017-06-29 | 2017-08-25 | Zhejiang Uniview Technologies Co., Ltd. | A shooting area occlusion method and device applied to a mobile camera terminal |
CN107820041A (en) * | 2016-09-13 | 2018-03-20 | Huawei Digital Technologies (Suzhou) Co., Ltd. | Privacy masking method and device |
CN107945103A (en) * | 2017-11-14 | 2018-04-20 | Shanghai Goertek Robotics Co., Ltd. | Privacy masking method and device for unmanned aerial vehicle images, and unmanned aerial vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN110322397A (en) | 2019-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110322397B (en) | Image shielding method, server, monitoring system and storage medium | |
US11830141B2 (en) | Systems and methods for 3D facial modeling | |
JP6564537B1 (en) | 3D reconstruction method and apparatus using monocular 3D scanning system | |
US11010925B2 (en) | Methods and computer program products for calibrating stereo imaging systems by using a planar mirror | |
EP3046078B1 (en) | Image registration method and apparatus | |
TWI554976B (en) | Surveillance systems and image processing methods thereof | |
JP7018566B2 (en) | Image pickup device, image processing method and program | |
WO2020063987A1 (en) | Three-dimensional scanning method and apparatus and storage medium and processor | |
CN109495733B (en) | Three-dimensional image reconstruction method, device and non-transitory computer readable storage medium thereof | |
US20190156511A1 (en) | Region of interest image generating device | |
CN113538587A (en) | Camera coordinate transformation method, terminal and storage medium | |
CN112837207A (en) | Panoramic depth measuring method, four-eye fisheye camera and binocular fisheye camera | |
Hassan et al. | 3D distance measurement accuracy on low-cost stereo camera | |
EP3189493B1 (en) | Depth map based perspective correction in digital photos | |
CN210986289U (en) | Four-eye fisheye camera and binocular fisheye camera | |
JP6306996B2 (en) | VIDEO DATA PROCESSING METHOD, VIDEO DATA PROCESSING DEVICE, AND VIDEO DATA PROCESSING PROGRAM | |
KR20220121533A (en) | Method and device for restoring image obtained from array camera | |
KR101725166B1 (en) | 3D image reconstitution method using 2D images and device for the same | |
CN110800020A (en) | Image information acquisition method, image processing equipment and computer storage medium | |
CN110470216A (en) | A kind of three-lens high-precision vision measurement method and device | |
CN106897708B (en) | Three-dimensional face detection method and device | |
EP2866446B1 (en) | Method and multi-camera portable device for producing stereo images | |
KR20200057929A (en) | Method for rectification of stereo images captured by calibrated cameras and computer program | |
Kanbara et al. | 3D scene reconstruction from reflection images in a spherical mirror | |
Fujiyama et al. | Multiple view geometries for mirrors and cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||