EP2741237B1 - Method, apparatus and system for implementing a video mask - Google Patents

Method, apparatus and system for implementing a video mask

Info

Publication number
EP2741237B1
Authority
EP
European Patent Office
Prior art keywords
video data
masked
masked area
video
area
Prior art date
Legal status
Active
Application number
EP12872312.9A
Other languages
English (en)
French (fr)
Other versions
EP2741237A4 (de)
EP2741237A1 (de)
Inventor
Duanling SONG
Feng Wang
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP2741237A1
Publication of EP2741237A4
Application granted
Publication of EP2741237B1

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems
    • G08B13/194: Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678: User interface
    • G08B13/19686: Interfaces masking personal details for privacy, e.g. blurring faces, vehicle license plates

Definitions

  • Embodiments of the present invention relate to the field of video surveillance, and in particular, to a method, an apparatus, and a system for implementing video mask.
  • Encryption processing is performed on the image data of a masked part in a video, and the processed video is sent to a monitoring terminal.
  • A user with permission is capable of decrypting the image data of the masked part in the received video to see the complete video, while a user without permission cannot see the image of the masked part.
  • However, a terminal of the user without permission is also capable of acquiring the image data of the masked part, and if an abnormal means is used to decrypt that data, the image of the masked part can be seen. This causes a security risk.
  • The document WO 2006070249A1 relates to a video surveillance system which addresses the issue of privacy rights and scrambles regions of interest in a video scene to protect the privacy of human faces and objects captured by the system.
  • The video surveillance system is configured to identify persons and/or objects captured in a region of interest of a video scene by various techniques, such as detecting changes in a scene or by face detection.
  • The document US 20100149330A1 relates to a system and method for operator-side privacy zone masking of surveillance.
  • The system includes a video surveillance camera equipped with a coordinate engine for determining coordinates of a current field of view of the surveillance camera, and a frame encoder for embedding the determined coordinates with video frames of the current field of view.
  • Embodiments of the present invention provide a method, an apparatus, and a system for implementing video mask, so as to solve a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • The peripheral unit 110 is configured to collect video data and send the collected video data to the monitoring platform through the transmission network.
  • The peripheral unit 110 may generate, according to set description information of a masked area, non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area, and separately transmit them to the monitoring platform.
  • In terms of hardware, the peripheral unit 110 may be any type of camera device, for example, a webcam such as a dome camera, a box camera, or a semi-dome camera, or, for another example, an analog camera together with an encoder.
  • The monitoring platform 120 is configured to receive the masked video data and the non-masked video data that are sent by the peripheral unit 110, or to obtain masked video data and non-masked video data by separating complete video data received from the peripheral unit 110, and to send corresponding video data to the monitoring terminal 130 according to the permission of the user of the monitoring terminal. For a user that has permission to acquire the masked video data, the monitoring platform 120 may send the masked video data and the non-masked video data to the monitoring terminal for merging and playing; alternatively, the monitoring platform 120 may merge the masked video data and the non-masked video data and send the merged data to the monitoring terminal for playing.
  • The monitoring terminal 130 is configured to receive the video data sent by the monitoring platform and, if the received video data includes both the non-masked video data and the masked video data, to merge and play the masked video data and the non-masked video data.
  • FIG. 2 is a schematic flowchart of a method for implementing video mask according to a first embodiment of the present invention.
  • Step 210 Receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • The masked video data and the non-masked video data may specifically be encoded in the H.264 format.
  • The device identifier is used to uniquely identify the peripheral unit; specifically, it may include an identifier of a camera of the peripheral unit, and may further include an identifier of a cloud mirror of the peripheral unit.
  • Step 220 Determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • The masked area may specifically include one or more areas, where an area may be rectangular, circular, polygonal, and the like. If one area is included, the masked video data corresponding to the masked area may specifically include one channel of video data. If multiple areas are included, the masked video data corresponding to the masked area may specifically include one channel of video data, or may include multiple channels of video data, for example, each area included in the masked area corresponds to one channel of video data.
  • Description information of the masked area may be used to describe the masked area.
  • The description information of the masked area specifically includes a coordinate of the masked area.
  • For a rectangular area, the description information of the masked area may include coordinates of at least three vertexes of the rectangle, or may include only a coordinate of one vertex of the rectangle together with a width and a height of the rectangle, for example (x, y, w, h), where x is the horizontal coordinate of the upper-left vertex, y is the vertical coordinate of the upper-left vertex, w is the width, and h is the height.
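As an illustration of the (x, y, w, h) form described above, the following sketch converts between that description and explicit vertex coordinates. The `Rect` name and its helper methods are illustrative, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int  # horizontal coordinate of the upper-left vertex
    y: int  # vertical coordinate of the upper-left vertex
    w: int  # width of the rectangle
    h: int  # height of the rectangle

    def vertices(self):
        """Return the four vertex coordinates, clockwise from the upper left."""
        return [(self.x, self.y), (self.x + self.w, self.y),
                (self.x + self.w, self.y + self.h), (self.x, self.y + self.h)]

    @classmethod
    def from_vertices(cls, upper_left, lower_right):
        """Rebuild the (x, y, w, h) description from two opposite vertices."""
        (x1, y1), (x2, y2) = upper_left, lower_right
        return cls(x1, y1, x2 - x1, y2 - y1)
```

Either representation carries the same information, which is why the patent allows the description information to use vertex lists or the compact (x, y, w, h) tuple interchangeably.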
  • Overall permission control may be performed for the masked video data, that is, permission to access the masked video data is classified into two levels: having access permission and having no access permission. In this case, it can be directly determined whether a user has permission to access the masked video data; the first masked video data is then the masked video data, and the first masked area is the masked area (that is, the whole masked area is included).
  • Area-based permission control may also be performed for the masked video data: respective permission is set for different areas, that is, video data that corresponds to different areas may correspond to different permission.
  • For example, the masked area includes three areas, where area 1 and area 2 correspond to permission A, and area 3 corresponds to permission B.
  • For another example, the masked area includes three areas, where area 1 corresponds to permission A, area 2 corresponds to permission B, and area 3 corresponds to permission C. In this case, it is necessary to determine whether the user has permission to access the masked video data that corresponds to a specific area.
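The area-based scheme above reduces to a mapping from areas to permission levels; the platform then returns only the areas whose required permission the user holds. The names `AREA_PERMISSIONS` and `accessible_areas` are assumptions for illustration:

```python
# Example configuration matching the second example above:
# area 1 -> permission A, area 2 -> permission A, area 3 -> permission B.
AREA_PERMISSIONS = {"area1": "A", "area2": "A", "area3": "B"}

def accessible_areas(user_permissions, area_permissions=AREA_PERMISSIONS):
    """Return the masked areas whose required permission the user holds."""
    return {area for area, perm in area_permissions.items()
            if perm in user_permissions}
```

A user holding only permission A would thus receive the masked video data for areas 1 and 2, while a user with no permission receives none of the masked areas.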
  • The permission may be determined according to a password. For example, if a password that is received from the first monitoring terminal and used to acquire the first masked video data is determined to be correct (that is, the user inputs a correct password), it is determined that the user has the permission to acquire the first masked video data.
  • The permission may alternatively be determined according to a user identifier of the user of the first monitoring terminal.
  • For example, an authorized user identifier may be preconfigured, and if the user identifier matches the authorized user identifier, it is determined that the user has the permission to acquire the first masked video data; an authorized account type may also be preconfigured, and if the account type corresponding to the user identifier matches the authorized account type, it is determined that the user has the permission to acquire the first masked video data.
  • The user identifier may be acquired during login performed by the user by using the monitoring terminal.
  • Alternatively, the video request received in step 210 may carry the user identifier, and in this case, the user identifier carried in the video request may be acquired.
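The three checks above (password, authorized user identifier, authorized account type) can be sketched as a single permission function. The preconfigured values and all names below are hypothetical placeholders, not values from the patent:

```python
# Hypothetical preconfigured authorization data.
AUTHORIZED_USERS = {"operator01"}
AUTHORIZED_ACCOUNT_TYPES = {"administrator"}
MASK_PASSWORD = "secret"

def has_mask_permission(user_id=None, account_type=None, password=None):
    """Grant permission to acquire masked video data if any check succeeds."""
    if password is not None and password == MASK_PASSWORD:
        return True                       # password check
    if user_id in AUTHORIZED_USERS:
        return True                       # authorized user identifier check
    if account_type in AUTHORIZED_ACCOUNT_TYPES:
        return True                       # authorized account type check
    return False
```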
  • If the determined result is yes, perform step 230A; if the determined result is no, perform step 230B.
  • Step 230A Acquire the first masked video data and the non-masked video data; and send the first masked video data and the non-masked video data to the first monitoring terminal, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data and send the merged video data to the first monitoring terminal.
  • A data type of the masked video data may also be sent to the first monitoring terminal, so that the first monitoring terminal identifies the masked video data from the received video data.
  • The data type may specifically be included in an acquiring address (for example, a URL) that is sent to the first monitoring terminal and used to acquire the masked video data; or the data type may be included in a message that is sent to the first monitoring terminal and carries the acquiring address; or the data type may be sent in a process of establishing a media channel between the first monitoring terminal and a monitoring platform, where the media channel is used to transmit the masked video data.
  • The method may further include: sending description information of the first masked area to the first monitoring terminal, so that the first monitoring terminal merges and plays, according to the description information of the first masked area, the first masked video data and the non-masked video data that are received in step 230A.
  • The description information may be included in the acquiring address (for example, a URL) that is sent to the first monitoring terminal and used to acquire the masked video data; or the description information may be included in the message that is sent to the first monitoring terminal and carries the acquiring address; or the description information may be sent in the process of establishing the media channel used to transmit the masked video data.
  • Step 230B Acquire the non-masked video data and send it to the first monitoring terminal.
  • Exemplary implementation manners of step 230A and step 230B are as follows:
  • The acquiring address (for example, a URL) sent to the first monitoring terminal carries a data type.
  • The data type is used to indicate whether the video data that can be acquired according to the acquiring address is the non-masked video data or the masked video data. Examples of a format of a URL (Uniform Resource Locator) that carries the data type are as follows:
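The patent's concrete URL examples are not reproduced here; the sketch below illustrates one plausible format in which the acquiring address carries the data type as a query parameter. The scheme, host, path, and parameter names are all assumptions:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_acquiring_url(host, device_id, data_type):
    """Build a hypothetical acquiring address carrying the data type
    ("masked" or "nonmasked") in its query string."""
    query = urlencode({"device": device_id, "type": data_type})
    return f"rtsp://{host}/live?{query}"

def data_type_of(url):
    """Read the data type back out of an acquiring address."""
    return parse_qs(urlparse(url).query)["type"][0]
```

With such a format, the monitoring terminal can distinguish the masked stream from the non-masked stream by inspecting the address alone, before any media data arrives.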
  • Optionally, description information of the masked area (for example, a coordinate of the masked area) corresponding to the masked video data may further be carried in the acquiring address of the masked video data.
  • Examples of a format of a URL that carries the data type and the description information of the masked area are as follows:
  • Alternatively, the monitoring platform may send the data type and/or the description information of the masked area to the first monitoring terminal by message exchange.
  • For example, the data type and/or the description information of the masked area is included in a message body of an XML structure in a message that carries the URL, as shown in the following:
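The patent's XML example is not reproduced in this text; the sketch below is a hypothetical message body carrying the URL, the data type, and a rectangular masked-area description. All element and attribute names are invented for illustration:

```python
import xml.etree.ElementTree as ET

def build_message_body(url, data_type, rect):
    """Build a hypothetical XML message body carrying the acquiring URL,
    the data type, and the masked-area description (x, y, w, h)."""
    media = ET.Element("media")
    ET.SubElement(media, "url").text = url
    ET.SubElement(media, "dataType").text = data_type
    x, y, w, h = rect
    ET.SubElement(media, "maskArea", x=str(x), y=str(y), w=str(w), h=str(h))
    return ET.tostring(media, encoding="unicode")
```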
  • A user-defined structure body in an RTSP ANNOUNCE message may also be used to carry the data type and/or the description information of the masked area in the process of establishing the media channel between the first monitoring terminal and the monitoring platform.
  • An example is shown as follows:
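The patent's ANNOUNCE example is likewise not reproduced here; the following sketch composes an RTSP ANNOUNCE message whose user-defined header fields carry the data type and the masked-area rectangle. The `X-Data-Type` and `X-Mask-Area` header names are invented for illustration:

```python
def build_announce(url, cseq, data_type, rect):
    """Compose a hypothetical RTSP ANNOUNCE message carrying the data type
    and masked-area description in user-defined header fields."""
    x, y, w, h = rect
    lines = [
        f"ANNOUNCE {url} RTSP/1.0",
        f"CSeq: {cseq}",
        f"X-Data-Type: {data_type}",        # illustrative user-defined field
        f"X-Mask-Area: {x},{y},{w},{h}",    # illustrative user-defined field
        "", "",                             # blank line ends the header block
    ]
    return "\r\n".join(lines)
```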
  • In step 230A, the acquiring of the first masked video data and the non-masked video data, the merging of them, and the sending of the merged video data to the first monitoring terminal specifically includes: generating an acquiring address (for example, a URL) used to acquire the merged video data and sending it to the first monitoring terminal; receiving a request that is sent by the first monitoring terminal and includes the acquiring address; establishing, with the first monitoring terminal according to the acquiring address, a media channel used to send the merged video data; and acquiring and merging the first masked video data and the non-masked video data and sending the merged video data to the first monitoring terminal through the media channel.
  • Similarly, step 230B may include: generating an acquiring address of the non-masked video data and sending it to the first monitoring terminal; receiving a request that is sent by the first monitoring terminal and includes the acquiring address; establishing, with the first monitoring terminal according to the acquiring address, a media channel used to send the non-masked video data; and acquiring the non-masked video data according to the acquiring address of the non-masked video data and sending it through the media channel.
  • A CU (Client Unit, client unit) in this implementation manner is client software installed on a monitoring terminal; it provides monitoring personnel with functions such as real-time video surveillance, video query and playback, and cloud mirror operations.
  • A monitoring platform includes an SCU (Service Control Unit, service control unit) and an MU (Media Unit, media unit).
  • The SCU and the MU may be implemented in a same universal server or dedicated server, or may be separately implemented in different universal servers or dedicated servers.
  • Step 301 A CU sends a video request to an SCU of a monitoring platform, where the video request includes a device identifier and is used to request video data of a peripheral unit identified by the device identifier, and the video data includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • Step 302 The SCU determines whether a user of the CU has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • The implementation of step 302 is the same as that of step 220, and therefore no further details are provided herein.
  • If the determined result is yes, steps 303A-312A are performed. In this implementation manner, it is assumed that the first masked video data includes one channel of video data.
  • If the determined result is no, steps 303B-308B are performed.
  • Steps 303A-306A The SCU requests a URL of the first masked video data and a URL of the non-masked video data from an MU, and the MU generates the URL of the first masked video data and the URL of the non-masked video data and returns them to the SCU.
  • Step 307A The SCU returns the URL of the first masked video data and the URL of the non-masked video data to the CU.
  • Steps 308A-309A The CU requests the first masked video data from the MU according to the URL of the first masked video data, establishes, with the MU, a media channel used to transmit the first masked video data, and receives, through the media channel, the first masked video data sent by the MU.
  • Steps 310A-311A The CU requests the non-masked video data from the MU according to the URL of the non-masked video data, establishes, with the MU, a media channel used to transmit the non-masked video data, and receives, through the media channel, the non-masked video data sent by the MU.
  • Step 312A The CU merges and plays the first masked video data and the non-masked video data.
  • Steps 303B-304B The SCU requests a URL of the non-masked video data from the MU, and the MU generates the URL of the non-masked video data and returns it to the SCU.
  • Step 305B The SCU returns the URL of the non-masked video data to the CU.
  • Steps 306B-307B The CU requests the non-masked video data from the MU according to the URL of the non-masked video data, establishes, with the MU, a media channel used to transmit the non-masked video data, and receives, through the media channel, the non-masked video data sent by the MU.
  • Step 308B The CU plays the non-masked video data.
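The signalling flow above (steps 301-312A for an authorized user, steps 303B-308B otherwise) can be summarized in a compact simulation: the SCU checks permission, asks the MU for one URL per stream, and the CU fetches each stream over its own channel. All class names, URL formats, and data values below are illustrative stand-ins, not the patent's protocol:

```python
class MU:
    """Stands in for the Media Unit: mints URLs and serves stream data."""
    def __init__(self):
        self.streams = {}                     # url -> video data

    def make_url(self, device_id, data_type):  # steps 303A-306A / 303B-304B
        url = f"rtsp://mu/{device_id}?type={data_type}"
        self.streams.setdefault(url, f"{data_type}-data")
        return url

    def fetch(self, url):                      # stands in for the media channel
        return self.streams[url]

class SCU:
    """Stands in for the Service Control Unit: permission check + URL relay."""
    def __init__(self, mu, authorized_users):
        self.mu, self.authorized = mu, authorized_users

    def handle_video_request(self, user, device_id):   # step 301 arrives here
        urls = [self.mu.make_url(device_id, "nonmasked")]
        if user in self.authorized:                     # step 302
            urls.append(self.mu.make_url(device_id, "masked"))
        return urls                                     # step 305B / 307A

def cu_play(mu, scu, user, device_id):
    """Stands in for the CU: request URLs, fetch streams, merge and play."""
    data = [mu.fetch(u) for u in scu.handle_video_request(user, device_id)]
    return " + ".join(sorted(data))                     # step 312A / 308B
```

An authorized user's terminal thus receives and merges both streams, while any other terminal only ever receives the non-masked stream and never holds the masked image data at all.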
  • In the embodiments, a monitoring platform determines the permission of a user of a monitoring terminal and, according to the determined result, sends only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data; to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, it sends the masked video data and the non-masked video data so that the monitoring terminal merges and plays them, or sends video data merged from the masked video data and the non-masked video data. This solves the security risk problem, in the prior art, of sending image data of a masked part to terminals of users with different permission.
  • In addition, area-based permission control may be implemented: if the masked area includes multiple areas, permission may be set for each area, and only the masked video data that corresponds to the areas a user has permission to acquire is sent to the monitoring terminal of that user, thereby implementing more accurate permission control.
  • The first embodiment of the present invention not only can be used in a real-time video surveillance scenario, but also can be used in a video view scenario (for example, video playback and video downloading). If the first embodiment is used in the video view scenario, the acquiring of non-masked video data in steps 230A and 230B is specifically reading the non-masked video data from a non-masked video file, and the acquiring of masked video data in step 230A is specifically reading the masked video data from a masked video file.
  • Before step 210, the following operations are performed: the masked video data is recorded into a masked video file, the non-masked video data is recorded into a non-masked video file, and an association between the masked video file and the non-masked video file is established.
  • The establishing an association between the masked video file and the non-masked video file specifically includes: recording a non-masked video index and a masked video index, and establishing an association between the non-masked video index and the masked video index, where the non-masked video index includes a device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file (for example, a storage address of the non-masked video file, which may specifically be an absolute path of the non-masked video file), and the indication information of the non-masked video data is used to indicate that the non-masked video index is an index of the non-masked video file; and the masked video index includes indication information of the masked video data and an identifier of the masked video file (for example, a storage address of the masked video file, which may specifically be an absolute path of the masked video file), and the indication information of the masked video data is used to indicate that the masked video index is an index of the masked video file.
  • Optionally, both the non-masked video index and the masked video index may include indication information of a non-independent index, where the indication information of the non-independent index is used to indicate an index associated with the index.
  • For example, the indication information of the non-independent index in the non-masked video index is used to indicate a masked video index associated with the non-masked video index.
  • Optionally, the non-masked video index and/or the masked video index may further include description information of a masked area, or information (for example, a storage address of the description information of the masked area) used to acquire the description information of the masked area.
  • The establishing an association between the non-masked video index and the masked video index may specifically include recording an identifier (for example, an index number) of the masked video index into the non-masked video index, or may further include recording an identifier (for example, an index number) of the non-masked video index into the masked video index, or may further include recording an association between the identifier of the masked video index and the identifier of the non-masked video index. It should be noted that if the masked video data includes multiple channels of video data, a masked video index may be established for each channel of video data, and an association is established between the non-masked video index and each masked video index.
  • Description information of the masked area corresponding to the video file, or information used to acquire that description information, is recorded in each masked video index.
  • Optionally, the video request sent in step 210 may further include a view time.
  • In this case, the acquiring of the non-masked video data is specifically acquiring the video data corresponding to the view time from the non-masked video file, and may specifically include: acquiring the non-masked video index according to the identifier of the peripheral unit, the view time, and the indication information of the non-masked video data; acquiring the non-masked video file according to the identifier of the non-masked video file in the non-masked video index; and acquiring the non-masked video data corresponding to the view time from the non-masked video file.
  • The acquiring of the masked video data is specifically acquiring, according to the association between the masked video file and the non-masked video file, one or more video files that are associated with the non-masked video file and correspond to the first masked area, and acquiring the video data corresponding to the view time from those video files. This specifically includes: acquiring, according to the association between the non-masked video index and the masked video index (for example, according to the identifier of the masked video index in the non-masked video index), the masked video index associated with the non-masked video index; acquiring, according to the identifier of the masked video file included in the masked video index, the one or more video files corresponding to the first masked area; and acquiring the video data corresponding to the view time from those video files.
  • Optionally, the masked video index associated with the non-masked video index may further be determined according to the indication information of the non-independent index in the non-masked video index, so as to improve the efficiency of the monitoring platform in retrieving the masked video index.
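The index scheme described above can be sketched with two small records: a non-masked video index carrying the device identifier, time range, and file identifier, plus the index numbers of its associated masked video indexes. All field and function names are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MaskedIndex:
    index_no: int
    file_id: str                  # e.g. absolute path of the masked video file
    mask_description: Tuple[int, int, int, int]   # (x, y, w, h) of the area

@dataclass
class NonMaskedIndex:
    device_id: str
    start_time: int
    end_time: int
    file_id: str                  # e.g. absolute path of the non-masked file
    masked_index_nos: List[int] = field(default_factory=list)  # association

def find_files(non_masked_indexes, masked_indexes, device_id, view_time):
    """Return the non-masked file covering view_time for the device, plus
    the masked files reached through the recorded association."""
    by_no = {m.index_no: m for m in masked_indexes}
    for idx in non_masked_indexes:
        if idx.device_id == device_id and idx.start_time <= view_time <= idx.end_time:
            return idx.file_id, [by_no[n].file_id for n in idx.masked_index_nos]
    return None, []
```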
  • Specifically, an acquiring address used to acquire the non-masked video data may be generated according to the non-masked video index and sent to the first monitoring terminal; a request that is sent by the first monitoring terminal and includes the acquiring address of the non-masked video data is received; a media channel used to send the non-masked video data is established with the first monitoring terminal according to the acquiring address; the non-masked video data is acquired according to the acquiring address; and the non-masked video data is sent through the media channel. For example, as shown in FIG.
  • The SCU of the monitoring platform acquires the non-masked video index after receiving the video request, requests, from the MU according to the non-masked video index, a URL used to acquire the non-masked video data corresponding to the non-masked video index, and sends the URL to the CU.
  • The MU receives the request that is sent by the CU and includes the URL, establishes, with the CU according to the URL, a media channel used to send the non-masked video data, reads the non-masked video data in the video file according to the URL, and sends the non-masked video data to the CU through the media channel.
  • A process of sending the masked video data after the masked video index is acquired is similar to the process of sending the non-masked video data after the non-masked video index is acquired, and therefore no further details are provided herein.
  • The method may further include: sending description information of the first masked area to the first monitoring terminal, so that the first monitoring terminal merges and plays, according to the description information of the first masked area, the first masked video data and the non-masked video data that are received in step 230A.
  • This may specifically include: acquiring the description information of the masked area that is included in the non-masked video index or in a masked video index corresponding to the first masked video data, or acquiring the description information of the first masked area according to information that is included in the non-masked video index or the masked video index and used to acquire the description information of the first masked area; and sending the acquired description information of the first masked area to the first monitoring terminal.
  • The description information of the first masked area may be carried in a message that is sent to the first monitoring terminal and carries an acquiring address of the first masked video data.
  • The method further includes receiving a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes a device identifier of the peripheral unit and the description information of the masked area.
  • The description information of the masked area may be sent to the peripheral unit, and the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area are received; or the masked video data and the non-masked video data may be obtained by separating, according to the description information of the masked area, complete video data received from the peripheral unit.
  • The masked video data and the non-masked video data may be sent to the first monitoring terminal and be merged and played by the first monitoring terminal, or the masked video data and the non-masked video data may be merged and then sent to the first monitoring terminal.
  • The first monitoring terminal and the second monitoring terminal may be a same monitoring terminal.
  • In the embodiments, an entity generating the non-masked video data and the masked video data may be a peripheral unit or a monitoring platform, and an entity merging the non-masked video data and the masked video data may be a monitoring platform or a monitoring terminal (that is, the first monitoring terminal in the first embodiment of the present invention).
  • A first exemplary implementation manner is as follows: As shown in FIG. 4, the peripheral unit generates the non-masked video data and the masked video data; the monitoring platform separately sends the monitoring terminal (for example, the first monitoring terminal in this embodiment) the non-masked video data and the masked video data (for example, the first masked video data in this embodiment) that the user has permission to acquire; and the monitoring terminal merges and plays the received video data.
  • Step 401 A second monitoring terminal sends a masked area setting request to a monitoring platform, where the masked area setting request includes a device identifier and description information of a masked area.
  • The masked area may specifically include one or more areas, where an area may be rectangular, circular, polygonal, and the like.
  • The description information of the masked area specifically includes a coordinate of the masked area.
  • For a rectangular area, the description information of the masked area may include coordinates of at least three vertexes of the rectangle, or may include only a coordinate of one vertex of the rectangle together with a width and a height of the rectangle, for example (x, y, w, h), where x is the horizontal coordinate of the upper-left vertex, y is the vertical coordinate of the upper-left vertex, w is the width, and h is the height.
  • Step 402 The monitoring platform sends the masked area setting request to a peripheral unit identified by the device identifier, where the masked area setting request includes the description information of the masked area.
  • Step 403 The peripheral unit encodes a captured video picture to generate masked video data and non-masked video data.
  • Specifically, the peripheral unit encodes the captured video picture into the non-masked video data corresponding to the non-masked area and the masked video data corresponding to the masked area. If the masked area includes one area, the video picture corresponding to the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data.
  • If the masked area includes multiple areas, there are several options: the video pictures corresponding to the multiple areas may be encoded into one channel of video data, that is, the masked video data includes one channel of video data; or the video picture corresponding to each of the multiple areas may be encoded into its own channel of video data, that is, the masked video data includes multiple channels of video data and each area corresponds to one channel of video data; or the video pictures corresponding to areas with the same permission among the multiple areas may be encoded into one channel of video data, that is, areas corresponding to the same permission correspond to a same channel of video data. For example, if the masked area includes three areas, where area 1 and area 2 correspond to the same permission and area 3 corresponds to another permission, the video pictures corresponding to area 1 and area 2 are encoded into a same channel of video data, and the video picture corresponding to area 3 is encoded into another channel of video data.
  • the video picture corresponding to the masked area may be directly encoded into the masked video data, that is, a video data frame of the masked video data includes only pixel data of the video picture corresponding to the masked area; or the whole captured video picture may be encoded after the video picture corresponding to the non-masked area is filled by using a set pixel value so as to generate the masked video data, that is, a video data frame of the masked video data includes both pixel data of the video picture corresponding to the masked area and filled pixel data.
  • Encoding formats include but are not limited to H.264, MPEG4, and MJPEG.
  • the video picture corresponding to the non-masked area may be directly encoded into the non-masked video data, or the whole captured video picture may be encoded after the video picture corresponding to the masked area is filled by using a set pixel value so as to generate the non-masked video data, where the set pixel value is preferably RGB (0, 0, 0).
  • timestamps of video data frames corresponding to a same complete video picture are kept completely consistent in the masked video data and the non-masked video data.
  • the description information of the masked area is sent by the monitoring platform to the peripheral unit.
  • the description information of the masked area may be preset on the peripheral unit.
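The two encoding options in step 403 can be sketched as follows. This is an illustrative Python sketch only (the function name, the list-of-tuples frame representation, and the rectangle-only masks are assumptions, not the claimed encoder); it produces, from one captured frame, a non-masked frame and a masked frame in the "fill with a set pixel value" variant:

```python
FILL = (0, 0, 0)  # the set pixel value, preferably RGB (0, 0, 0)

def split_frame(frame, masks):
    # Split one captured frame (a 2D list of RGB tuples) into a non-masked
    # frame and a masked frame. `masks` is a list of (x, y, w, h) rectangles,
    # the masked-area description. Each output keeps the whole picture size,
    # with the other region filled with the set pixel value.
    def inside(px, py):
        return any(x <= px < x + w and y <= py < y + h for (x, y, w, h) in masks)

    non_masked = [[FILL if inside(px, py) else frame[py][px]
                   for px in range(len(frame[0]))] for py in range(len(frame))]
    masked = [[frame[py][px] if inside(px, py) else FILL
               for px in range(len(frame[0]))] for py in range(len(frame))]
    return non_masked, masked
```

The "directly encoded" variant would instead crop the frame to the masked rectangles; the filled variant shown here keeps both streams at the full picture size, which simplifies the later merge.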
  • Step 404 Send the generated masked video data and non-masked video data to the monitoring platform.
  • the peripheral unit may further send a data type of the masked video data to the monitoring platform, so that the monitoring platform identifies the masked video data from received video data.
  • the data type may be specifically included in an acquiring address (for example, a URL) that is sent to the monitoring platform and used to acquire the masked video data (where the monitoring platform may acquire the masked video data from the peripheral unit by using the acquiring address), or the data type may be included in a message that is sent to the monitoring platform and used to carry the acquiring address, or the data type may be sent in a process of establishing a media channel between the monitoring platform and the peripheral unit and used to transmit the masked video data.
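The description does not fix a concrete syntax for carrying the data type in the acquiring address. Purely as an illustration, the data type could be appended to the URL as a query parameter (the parameter name `dataType` and the address format are assumptions):

```python
from urllib.parse import urlencode, urlparse

def add_data_type(url, data_type):
    # Append the data type to an acquiring address as a query parameter so
    # that the monitoring platform can identify the masked video data among
    # received video data. Illustrative only; no concrete syntax is specified.
    sep = '&' if urlparse(url).query else '?'
    return url + sep + urlencode({"dataType": data_type})
```

For example, `add_data_type("rtsp://pu.example/stream", "masked")` (a hypothetical address) produces `rtsp://pu.example/stream?dataType=masked`.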
  • Step 405 A first monitoring terminal sends a video request to the monitoring platform, where the video request includes the device identifier of the peripheral unit.
  • Step 406 Determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • steps 407A-409A are performed.
  • steps 407B-408B are performed.
  • Step 407A The monitoring platform sends description information of the first masked area to the first monitoring terminal.
  • Step 408A The monitoring platform sends the first masked video data and the non-masked video data to the first monitoring terminal.
  • Step 409A The first monitoring terminal merges and plays the received first masked video data and non-masked video data.
  • the received first masked video data and non-masked video data are merged and played according to the description information of the first masked area.
  • the first masked video data includes one channel of video data
  • the first masked video data is decoded to obtain a masked video data frame
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in the masked video data frame is extracted
  • the extracted pixel data is added, according to the description information of the first masked area, to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and the complete video data frame is played.
  • the extracting the pixel data in the masked video data frame is specifically extracting all pixel data in the masked video data frame.
  • if, during the encoding, the whole captured video picture was encoded after the video picture corresponding to the non-masked area was filled by using a set pixel value so as to generate the masked video data, that is, a video data frame of the masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data, then pixel data of the pixel area in the masked video data frame is extracted according to the description information of the first masked area, where the pixel area corresponds to the first masked area.
  • the first masked video data includes multiple channels of video data
  • each channel of video data in the first masked video data is decoded to obtain a masked video data frame of the channel of video data
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in masked video data frames of all channels of video data is extracted, where the masked video data frames have a same timestamp
  • the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and the complete video data frame is played.
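The merge described above amounts to copying the extracted masked pixels into the pixel area of the timestamp-matched non-masked frame. A minimal sketch, assuming decoded frames are 2D lists of pixels and the first masked area is a single (x, y, w, h) rectangle (the function name is hypothetical):

```python
def merge_frames(non_masked, masked, area):
    # Add the extracted masked pixel data into the pixel area of the
    # non-masked frame that corresponds to the first masked area. Both frames
    # must share the same timestamp; the caller synchronizes them first.
    x, y, w, h = area
    merged = [row[:] for row in non_masked]   # copy; keep the input frame intact
    for py in range(y, y + h):
        for px in range(x, x + w):
            merged[py][px] = masked[py][px]   # overwrite with masked-area pixels
    return merged
```

With multiple channels of masked video data, the same copy is simply repeated once per channel, each into its own pixel area.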
  • both the non-masked video data and the first masked video data are transmitted to the first monitoring terminal through the RTP protocol.
  • the first monitoring terminal receives a non-masked video data code stream and a first masked video data code stream that are encapsulated through the RTP protocol, parses the non-masked video data code stream and the first masked video data code stream to obtain the non-masked video data and the first masked video data respectively, and separately caches the non-masked video data and the first masked video data in a decoder buffer area.
  • Frame data is synchronized according to a synchronization timestamp, that is, frame data that has a same timestamp is separately extracted from the non-masked video data and the first masked video data.
  • the extracted frame data of the non-masked video data and the extracted frame data of the first masked video data that have the same timestamp are separately decoded to generate corresponding YUV data.
  • YUV data of the first masked video data and YUV data of the non-masked video data are merged according to the description information of the first masked area, and the merged YUV data is rendered and played.
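The synchronization step above pairs cached frame data by synchronization timestamp before decoding and merging. A sketch, assuming each decoder buffer area is modeled as a mapping from timestamp to frame data (the representation is an assumption for illustration):

```python
def synchronized_pairs(non_masked_buf, masked_buf):
    # Pair frame data from the two cached streams by synchronization
    # timestamp: only timestamps present in both buffers yield a pair,
    # returned in timestamp order for decoding and merging.
    common = sorted(non_masked_buf.keys() & masked_buf.keys())
    return [(t, non_masked_buf[t], masked_buf[t]) for t in common]
```

This relies on step 403 keeping the timestamps of frames belonging to the same complete picture completely consistent across the two streams.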
  • a request for acquiring video data is sent to the peripheral unit after step 404.
  • video data that a user of the first monitoring terminal has permission to acquire may be requested from the peripheral unit according to the determined result in step 406. For example, if the user only has permission to acquire the non-masked video data, only the non-masked video data is requested; and if the user has permission to acquire the non-masked video data and the first masked video data, both the non-masked video data and the first masked video data are requested.
  • the peripheral unit After receiving the request, the peripheral unit generates the requested video data and returns it to the monitoring platform.
  • a method used by the peripheral unit to generate the non-masked video data and the first masked video data is the same as that in step 403, and therefore no further details are provided.
  • Step 407B The monitoring platform forwards the non-masked video data to the first monitoring terminal.
  • Step 408B The first monitoring terminal plays the received non-masked video data.
  • a second exemplary implementation manner is as follows: As shown in FIG. 7 , the peripheral unit generates the non-masked video data and the masked video data, and the monitoring platform merges the non-masked video data and the masked video data (that is, the first masked video data) that a user has permission to acquire and then sends the merged video data to the monitoring terminal.
  • Steps 501-506 are the same as steps 401-406, and therefore no further details are provided.
  • steps 507A-510A are performed.
  • steps 507B-508B are performed.
  • Step 507A is the same as step 407A.
  • Step 508A The monitoring platform merges the non-masked video data and the first masked video data.
  • the first masked video data and the non-masked video data are merged according to the description information of the masked area received in step 501.
  • the first masked video data includes one channel of video data
  • the first masked video data is decoded to obtain a masked video data frame
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in the masked video data frame is extracted
  • the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the masked area
  • the complete video data frame is encoded to obtain the merged video data.
  • the extracting the pixel data in the masked video data frame is specifically extracting all pixel data in the masked video data frame.
  • if, during the encoding, the whole captured video picture was encoded after the video picture corresponding to the non-masked area was filled by using a set pixel value so as to generate the first masked video data, that is, a video data frame of the first masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data, then pixel data of a pixel area in the masked video data frame is extracted, where the pixel area corresponds to the first masked area.
  • the first masked video data includes multiple channels of video data
  • each channel of video data in the first masked video data is decoded to obtain a masked video data frame of the channel of video data
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in masked video data frames of all channels of video data is extracted, where the masked video data frames have a same timestamp
  • the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the masked area
  • the complete video data frame is encoded to obtain the merged video data.
  • both the non-masked video data and the first masked video data are transmitted to the monitoring platform through the RTP protocol.
  • Processing after the monitoring platform receives a non-masked video data code stream and a first masked video data code stream that are encapsulated through the RTP protocol is similar to the processing after the first monitoring terminal receives a code stream in step 409A.
  • a difference lies only in that the first monitoring terminal renders and plays YUV data after merging the YUV data, while the monitoring platform encodes merged YUV data after merging the YUV data, so as to generate the merged video data.
  • Step 509A Send the merged video data to the first monitoring terminal.
  • Step 510A The first monitoring terminal directly decodes and plays the merged video data.
  • Steps 507B-508B are the same as steps 407B-408B.
  • a third exemplary implementation manner is as follows: As shown in FIG. 10 , the peripheral unit generates complete video data, the monitoring platform obtains the masked video data and the non-masked video data by separating the complete video data received from the peripheral unit, and separately sends the monitoring terminal the non-masked video data and the masked video data that a user has permission to acquire, and the monitoring terminal merges and plays the received masked video data and non-masked video data.
  • Step 601 is the same as step 401, and therefore no further details are provided.
  • Step 602 The peripheral unit encodes a captured video picture into complete video data and sends the complete video data to the monitoring platform.
  • Step 603 The monitoring platform obtains the masked video data corresponding to a masked area and the non-masked video data corresponding to a non-masked area by separating the complete video data according to the description information of the masked area received in step 601.
  • a video picture in the complete video data may be encoded into one channel of video data, that is, the masked video data includes one channel of video data, where the video picture corresponds to the masked area.
  • video pictures in the complete video data that correspond to the multiple areas included in the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data; or video pictures in the complete video data that correspond to the multiple areas included in the masked area may be encoded into one channel of video data each, that is, the masked video data includes multiple channels of video data and each area corresponds to one channel of video data; or video pictures corresponding to areas with same permission among the multiple areas included in the masked area may be encoded into one channel of video data, that is, the areas corresponding to the same permission correspond to a same channel of video data, for example, if the masked area includes three areas, area 1 and area 2 correspond to same permission, and area 3 corresponds to another permission, video pictures corresponding to area 1 and area 2 are encoded into a same channel of video data, and a video picture corresponding to area 3 is encoded into another channel of video data.
  • the video picture corresponding to the masked area may be directly encoded into the masked video data. This includes: decoding the complete video data to obtain a complete video data frame and extracting pixel data of the video picture in the complete video data frame to generate a video data frame of the masked video data, where the video picture corresponds to the masked area.
  • a video picture in the whole captured video picture may also be encoded after filling the video picture by using a set pixel value so as to generate the masked video data, where the video picture corresponds to the non-masked area.
  • the obtaining the non-masked video data corresponding to a non-masked area may specifically be directly encoding the video picture corresponding to the non-masked area into the non-masked video data, which includes: decoding the complete video data to obtain a complete video data frame and extracting, from the complete video data frame, pixel data of the video picture corresponding to the non-masked area to generate the video data frame of the non-masked video data; or may specifically be encoding the whole video picture after filling the video picture corresponding to the masked area by using a set pixel value so as to generate the non-masked video data, which includes: decoding the complete video data to obtain a complete video data frame and setting the pixels of the pixel area corresponding to the masked area in the complete video data frame to the set pixel value, where the set pixel value is preferably RGB (0, 0, 0).
  • Encoding formats include but are not limited to H.264, MPEG4, and MJPEG.
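Separation on the monitoring platform mirrors the filling done on the peripheral unit. A sketch of the "set the pixel area to the set pixel value" option applied to a decoded complete frame (the 2D-list-of-RGB-tuples representation and the function name are assumptions for illustration):

```python
def fill_area(frame, area, fill=(0, 0, 0)):
    # Set every pixel of the pixel area corresponding to the masked area to
    # the set pixel value, turning a decoded complete frame into a
    # non-masked frame before re-encoding.
    x, y, w, h = area
    out = [row[:] for row in frame]           # copy; keep the decoded frame intact
    for py in range(y, y + h):
        for px in range(x, x + w):
            out[py][px] = fill
    return out
```

The masked frame is obtained symmetrically, by filling (or cropping away) everything outside the masked area instead.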
  • Steps 604-605 are the same as steps 405-406.
  • steps 606A-608A are performed.
  • steps 606B-607B are performed.
  • Steps 606A-608A are the same as steps 407A-409A.
  • Steps 606B-607B are the same as steps 407B-408B.
  • a second embodiment of the present invention provides a monitoring platform 500.
  • the monitoring platform includes a video request receiving unit 501, a determining unit 502, an acquiring unit 503, and a video data sending unit 504.
  • the video request receiving unit 501 is configured to receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the determining unit 502 is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the acquiring unit 503 is configured to acquire the non-masked video data and configured to acquire the first masked video data when a determined result of the determining unit 502 is yes.
  • the video data sending unit 504 is configured to: when the determined result of the determining unit 502 is yes, send the first monitoring terminal the first masked video data and the non-masked video data that are acquired by the acquiring unit 503, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data that are acquired by the acquiring unit 503 to obtain merged video data, and send the merged video data to the first monitoring terminal; and further configured to: when the determined result of the determining unit 502 is no, send the first monitoring terminal the non-masked video data acquired by the acquiring unit 503.
  • the monitoring platform further includes a setting request receiving unit 505.
  • the setting request receiving unit 505 is configured to receive a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area.
  • the monitoring platform further includes a description information sending unit 506 and a first video data receiving unit 507.
  • the description information sending unit 506 is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit 507 is configured to receive the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area.
  • the monitoring platform further includes a second video data receiving unit 508 and a video data separating unit 509.
  • the second video data receiving unit 508 is configured to receive complete video data sent by the peripheral unit; and the video data separating unit 509 is configured to obtain the masked video data and the non-masked video data by separating the complete video data received by the second video data receiving unit.
  • the monitoring platform further includes a storing unit and an association establishing unit.
  • the storing unit is configured to store the masked video data into a masked video file and store the non-masked video data into a non-masked video file, and the masked video file includes one or more video files.
  • the association establishing unit is configured to establish an association between the masked video file and the non-masked video file.
  • the video request receiving unit 501 is specifically configured to receive a video request that includes view time and is sent by the first monitoring terminal.
  • the acquiring unit 503 is specifically configured to acquire video data corresponding to the view time from the non-masked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and are associated with the non-masked video file and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area when the determined result of the determining unit 502 is yes.
  • the association establishing unit is specifically configured to record a non-masked video index and a masked video index and establish an association between the non-masked video index and the masked video index, where the non-masked video index includes the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file.
  • the acquiring unit 503 is specifically configured to obtain, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index, acquire the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index, and acquire the video data corresponding to the view time from the non-masked video file; and further specifically configured to acquire, when the determined result of the determining unit 502 is yes, the masked video index associated with the non-masked video index according to the association, acquire, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area, and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area.
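The index matching performed by the acquiring unit 503 can be sketched as follows. Field names are assumptions for illustration; the description specifies only the content of each index (device identifier, start and end time, indication information, file identifier), not its layout:

```python
def match_non_masked_index(indexes, device_id, view_time):
    # Match the video request against recorded non-masked video indexes by
    # device identifier and by whether the view time falls inside the video
    # start/end interval; return the matching index, or None if no recording
    # covers the requested time.
    for idx in indexes:
        if (idx["type"] == "non-masked"
                and idx["device_id"] == device_id
                and idx["start"] <= view_time <= idx["end"]):
            return idx
    return None
```

The returned index carries the identifier of the non-masked video file; the associated masked video indexes are then looked up through the recorded association to fetch the masked video files the user has permission to acquire.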
  • a functional unit described in the second embodiment of the present invention can be used to implement the method described in the first embodiment.
  • the video request receiving unit 501, the determining unit 502, the setting request receiving unit 505, and the description information sending unit 506 are located on an SCU of the monitoring platform, and the acquiring unit 503, the video data sending unit 504, the first video data receiving unit 507, the second video data receiving unit 508, and the video data separating unit 509 are located on an MU of the monitoring platform.
  • a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • a third embodiment of the present invention provides a monitoring terminal 600.
  • the monitoring terminal includes a video request sending unit 601, a video data receiving unit 602, and a playing unit 603.
  • the video request sending unit 601 is configured to send a video request to a monitoring platform, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the video data receiving unit 602 is configured to receive first masked video data and the non-masked video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area; and further configured to receive the non-masked video data that is sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data in the masked video data.
  • the playing unit is configured to merge and play the first masked video data and the non-masked video data that are received by the video data receiving unit 602, or configured to play the non-masked video data received by the video data receiving unit 602.
  • the playing unit is specifically configured to decode the first masked video data to obtain a masked video data frame, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in the masked video data frame, add, according to description information of the first masked area, the extracted pixel data to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • the playing unit is specifically configured to decode each channel of video data in the first masked video data to obtain a masked video data frame of the channel of video data, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in masked video data frames of all channels of video data, where the masked video data frames have a same timestamp, add the extracted pixel data to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • a functional unit described in the third embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a fourth embodiment of the present invention provides a peripheral unit 700.
  • the peripheral unit includes a description information receiving unit 701, a video data encoding unit 702, and a video data sending unit 703.
  • the description information receiving unit 701 is configured to receive description information of a masked area, where the description information is sent by a monitoring platform.
  • the video data encoding unit 702 is configured to encode, according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area.
  • the video data sending unit 703 is configured to send the non-masked video data and the masked video data to the monitoring platform, so that the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, or sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the video data encoding unit 702 is specifically configured to: when the masked area includes one area, encode a video picture in the captured video picture into one channel of video data according to the description information of the masked area, where the video picture corresponds to the masked area; or when the masked area includes multiple areas, encode video pictures in the captured video picture into one channel of video data according to the description information of the masked area, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data each, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data, where the video pictures correspond to areas with same permission among the multiple areas included in the masked area; and further specifically configured to encode a video picture in the captured video picture into the non-masked video data according to the description information of the masked area, where the video picture corresponds to the non-masked area.
  • a functional unit described in the fourth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a fifth embodiment of the present invention provides a monitoring platform 1000, including:
  • the processor 1010, the communications interface 1020, and the memory 1030 communicate with each other through the bus 1040.
  • the communications interface 1020 is configured to communicate with a network element, for example, communicate with a monitoring terminal or a peripheral unit.
  • the processor 1010 is configured to execute a program 1032.
  • the program 1032 may include a program code, and the program code includes a computer operation instruction.
  • the processor 1010 is configured to execute a computer program stored in the memory 1030 and may specifically be a central processing unit (CPU, central processing unit), which is a core unit of a computer.
  • the memory 1030 is configured to store the program 1032.
  • the memory 1030 may include a high-speed RAM memory, or may further include a non-volatile memory (non-volatile memory), for example, at least one disk memory.
  • the program 1032 may specifically include a video request receiving unit 1032-1, a determining unit 1032-2, an acquiring unit 1032-3, and a video data sending unit 1032-4.
  • the video request receiving unit 1032-1 is configured to receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the determining unit 1032-2 is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the acquiring unit 1032-3 is configured to acquire the non-masked video data and configured to acquire the first masked video data when a determined result of the determining unit 1032-2 is yes.
  • the video data sending unit 1032-4 is configured to: when the determined result of the determining unit 1032-2 is yes, send the first monitoring terminal the first masked video data and the non-masked video data that are acquired by the acquiring unit 1032-3, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data that are acquired by the acquiring unit 1032-3 to obtain merged video data, and send the merged video data to the first monitoring terminal; and further configured to: when the determined result of the determining unit 1032-2 is no, send the first monitoring terminal the non-masked video data acquired by the acquiring unit 1032-3.
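The dispatch logic of the determining, acquiring, and sending units can be sketched as follows. This is an illustrative model only; the function name, the dict-based store, and the permission map are assumptions, not structures named in the patent.

```python
# Hypothetical sketch of the platform's permission-based dispatch:
# non-masked video data is always returned, while each masked area's
# data is returned only if the requesting user has permission for it.

def handle_video_request(request, permissions, store):
    """Return the video streams a terminal's user may receive.

    request:     dict with 'user' and 'device_id' keys
    permissions: maps (user, masked_area_id) -> bool
    store:       maps device_id -> {'non_masked': ..., 'masked': {area_id: data}}
    """
    video = store[request["device_id"]]
    streams = [video["non_masked"]]  # non-masked data is sent in every case
    # Add only the masked areas this user may acquire (the "first masked area").
    for area_id, data in video["masked"].items():
        if permissions.get((request["user"], area_id), False):
            streams.append(data)
    return streams
```

A terminal whose user holds no masked-area permission thus receives only the non-masked stream, matching the "determined result is no" branch above.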
  • the program further includes a setting request receiving unit 1032-5.
  • the setting request receiving unit 1032-5 is configured to receive a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area.
  • the monitoring platform further includes a description information sending unit 1032-6 and a first video data receiving unit 1032-7.
  • the description information sending unit 1032-6 is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit 1032-7 is configured to receive the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area.
  • the monitoring platform further includes a second video data receiving unit 1032-8 and a video data separating unit 1032-9.
  • the second video data receiving unit 1032-8 is configured to receive complete video data sent by the peripheral unit; and the video data separating unit 1032-9 is configured to obtain the masked video data and the non-masked video data by separating the complete video data received by the second video data receiving unit.
  • the program further includes a storing unit and an association establishing unit.
  • the storing unit is configured to store the masked video data into a masked video file and store the non-masked video data into a non-masked video file, and the masked video file includes one or more video files.
  • the association establishing unit is configured to establish an association between the masked video file and the non-masked video file.
  • the video request receiving unit 1032-1 is specifically configured to receive a video request that includes view time and is sent by the first monitoring terminal.
  • the acquiring unit 1032-3 is specifically configured to acquire video data corresponding to the view time from the non-masked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and are associated with the non-masked video file and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area when the determined result of the determining unit 1032-2 is yes.
  • the association establishing unit is specifically configured to record a non-masked video index and a masked video index and establish an association between the non-masked video index and the masked video index, where the non-masked video index includes the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file.
  • the acquiring unit 1032-3 is specifically configured to obtain, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index, acquire the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index, and acquire the video data corresponding to the view time from the non-masked video file; and further specifically configured to acquire, when the determined result of the determining unit 1032-2 is yes, the masked video index associated with the non-masked video index according to the association, acquire, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area, and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area.
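The index-matching step above can be sketched in miniature. The field names (`device_id`, `start`, `end`, `file_id`) and the flat-list representation are assumptions for illustration; the patent only specifies which pieces of information each index contains.

```python
# Illustrative sketch of matching a view request against recorded video
# indexes: the non-masked index is found by device identifier and view
# time, then the association yields the masked video files (if any).

def find_video_files(video_request, non_masked_indexes, associations):
    """Locate the non-masked file and any associated masked files
    covering the requested view time."""
    dev = video_request["device_id"]
    t = video_request["view_time"]
    for idx in non_masked_indexes:
        # Match on the device identifier and the recorded time span.
        if idx["device_id"] == dev and idx["start"] <= t <= idx["end"]:
            masked_idx = associations.get(idx["file_id"])  # may be absent
            masked_files = masked_idx["file_ids"] if masked_idx else []
            return idx["file_id"], masked_files
    return None, []
```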
  • each unit in the program 1032 refers to a corresponding unit in the second embodiment of the present invention, and therefore no further details are provided herein.
  • a functional unit described in the fifth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • a sixth embodiment of the present invention provides a monitoring terminal 2000, including:
  • the processor 2010, the communications interface 2020, and the memory 2030 communicate with each other through the bus 2040.
  • the communications interface 2020 is configured to communicate with a network element, for example, communicate with a monitoring platform.
  • the processor 2010 is configured to execute a program 2032.
  • the program 2032 may include program code, and the program code includes computer operation instructions.
  • the processor 2010 is configured to execute a computer program stored in the memory and may specifically be a central processing unit (CPU), the core unit of a computer.
  • the memory 2030 is configured to store the program 2032.
  • the memory 2030 may include a high-speed RAM, and may further include a non-volatile memory, for example, at least one disk memory.
  • the program 2032 may specifically include a video request sending unit 2032-1, a video data receiving unit 2032-2, and a playing unit 2032-3.
  • the video request sending unit is configured to send a video request to a monitoring platform, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the video data receiving unit 2032-2 is configured to receive first masked video data and the non-masked video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area; and further configured to receive the non-masked video data that is sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data in the masked video data.
  • the playing unit is configured to merge and play the first masked video data and the non-masked video data that are received by the video data receiving unit 2032-2, or configured to play the non-masked video data received by the video data receiving unit 2032-2.
  • the playing unit is specifically configured to decode the first masked video data to obtain a masked video data frame, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in the masked video data frame, add, according to description information of the first masked area, the extracted pixel data to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • the playing unit is specifically configured to decode each channel of video data in the first masked video data to obtain a masked video data frame of the channel of video data, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in masked video data frames of all channels of video data, where the masked video data frames have a same timestamp, add the extracted pixel data to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
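The merge performed by the playing unit — copying decoded masked pixels back into the non-masked frame at the coordinates given by the description information — can be sketched as below. Frames are modeled as nested lists and the rectangle `(x, y, w, h)` is an assumed shape for the description information; pairing of frames by timestamp is taken as already done.

```python
# Minimal sketch of the playing unit's merge step: pixel data decoded
# from a masked channel is written into the non-masked frame at the
# masked area's coordinates, producing a complete video data frame.

def merge_frames(non_masked_frame, masked_pixels, area):
    """Overlay masked pixels onto the non-masked frame.

    non_masked_frame: list of rows (the masked region is blanked out)
    masked_pixels:    list of rows decoded from the masked channel
    area:             (x, y, w, h) rectangle from the description info
    """
    x, y, w, h = area
    merged = [row[:] for row in non_masked_frame]  # keep the input intact
    for dy in range(h):
        for dx in range(w):
            merged[y + dy][x + dx] = masked_pixels[dy][dx]
    return merged
```

For multiple masked channels sharing a timestamp, the same overlay would simply be repeated once per channel before the complete frame is played.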
  • each unit in the program 2032 refers to a corresponding unit in the third embodiment of the present invention, and therefore no further details are provided herein.
  • a functional unit described in the sixth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a seventh embodiment of the present invention provides a peripheral unit 3000, including:
  • the processor 3010, the communications interface 3020, and the memory 3030 communicate with each other through the bus 3040.
  • the communications interface 3020 is configured to communicate with a network element, for example, communicate with a monitoring platform.
  • the processor 3010 is configured to execute a program 3032.
  • the program 3032 may include program code, and the program code includes computer operation instructions.
  • the processor 3010 is configured to execute a computer program stored in the memory and may specifically be a central processing unit (CPU), the core unit of a computer.
  • the memory 3030 is configured to store the program 3032.
  • the memory 3030 may include a high-speed RAM, and may further include a non-volatile memory, for example, at least one disk memory.
  • the program 3032 may specifically include a description information receiving unit 3032-1, a video data encoding unit 3032-2, and a video data sending unit 3032-3.
  • the description information receiving unit 3032-1 is configured to receive description information of a masked area, where the description information is sent by a monitoring platform;
  • the video data encoding unit 3032-2 is configured to encode, according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area.
  • the video data sending unit 3032-3 is configured to send the non-masked video data and the masked video data to the monitoring platform, so that the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, or sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the video data encoding unit 3032-2 is specifically configured to: when the masked area includes one area, encode a video picture in the captured video picture into one channel of video data according to the description information of the masked area, where the video picture corresponds to the masked area; or when the masked area includes multiple areas, encode video pictures in the captured video picture into one channel of video data according to the description information of the masked area, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data each, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data, where the video pictures correspond to areas with same permission among the multiple areas included in the masked area; and further specifically configured to encode a video picture in the captured video picture into the non-masked video data according to the description information of the masked area, where the video picture corresponds to the non-masked area.
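The third encoding option above — one channel of video data per group of areas that share the same permission — can be sketched as a simple grouping step. The dict-based area representation and the `permission` key are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical sketch of grouping masked areas by permission so that
# areas with the same permission are encoded into one channel of
# video data, enabling per-permission delivery by the platform.

def group_areas_by_permission(masked_areas):
    """masked_areas: list of dicts with 'id' and 'permission' keys.
    Returns one channel (a list of area ids) per distinct permission."""
    channels = defaultdict(list)
    for area in masked_areas:
        channels[area["permission"]].append(area["id"])
    return dict(channels)
```

Encoding one channel per area, or one channel for all areas, are the degenerate cases where every area has a distinct permission or all areas share one.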
  • each unit in the program 3032 refers to a corresponding unit in the fourth embodiment of the present invention, and therefore no further details are provided herein.
  • a functional unit described in the seventh embodiment of the present invention can be used to implement the method described in the first embodiment.
  • an eighth embodiment of the present invention provides a video surveillance system 4000.
  • the video surveillance system includes a monitoring terminal 4010 and a monitoring platform 4020.
  • the monitoring terminal 4010 is specifically the monitoring terminal according to the third or the sixth embodiment.
  • the monitoring platform 4020 is specifically the monitoring platform according to the second or the fifth embodiment.
  • the video surveillance system may further include a peripheral unit 4030, which is specifically the peripheral unit according to the fourth or the seventh embodiment.
  • a functional unit described in the eighth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the described apparatus embodiment is merely exemplary.
  • the unit division is merely logical function division and may be another division in an actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one position, or may be distributed on a plurality of network units. A part or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
  • When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or a part of the technical solutions may be implemented in a form of a software product.
  • the computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or a part of the steps of the methods described in the embodiment of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Claims (13)

  1. A method for implementing video access, wherein a monitoring platform communicates with a first monitoring terminal through a transmission network, the method comprising:
    receiving, at the monitoring platform, a video request sent by the first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area, where different areas have respective permissions;
    determining, by the monitoring platform, whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area with a respective permission, and the first masked area includes a part of the masked area; and
    if the determination is positive, acquiring the first masked video data and the non-masked video data; sending description information of the first masked area to the first monitoring terminal and subsequently sending the first masked video data and the non-masked video data to the first monitoring terminal, where the description information of the first masked area to be sent to the first monitoring terminal is used by the first monitoring terminal to merge the first masked video data and the non-masked video data; or merging the acquired first masked video data and the non-masked video data according to the description information of the first masked area to obtain merged video data and sending the merged video data to the first monitoring terminal; where the description information of the first masked area originates from a masked area setting request received from a second monitoring terminal and includes a coordinate of the first masked area; and
    if the determination is negative, acquiring the non-masked video data and sending the non-masked video data to the first monitoring terminal, wherein
    before the receiving of a video request sent by a first monitoring terminal, the method comprises:
    receiving the masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area; and
    sending the description information of the masked area to the peripheral unit and receiving the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area; or obtaining the masked video data and the non-masked video data by separating, according to the description information of the masked area, the complete video data received from the peripheral unit.
  2. The method according to claim 1, wherein
    before the acquiring of the first masked video data and the non-masked video data, the method comprises:
    storing the masked video data into a masked video file, storing the non-masked video data into a non-masked video file, and establishing an association between the masked video file and the non-masked video file, where the masked video file includes one or more video files;
    wherein the video request includes view time;
    wherein the acquiring of the non-masked video data specifically comprises: acquiring video data corresponding to the view time from the non-masked video file; and
    wherein the acquiring of the first masked video data specifically comprises: acquiring, according to the association, one or more video files that correspond to the first masked area and are associated with the non-masked video file, and acquiring video data corresponding to the view time from the one or more video files corresponding to the first masked area.
  3. The method according to claim 2, wherein
    the establishing of an association between the masked video file and the non-masked video file specifically comprises:
    recording a non-masked video index and a masked video index, where the non-masked video index includes the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file; and establishing an association between the non-masked video index and the masked video index;
    the acquiring of the non-masked video data specifically comprises: obtaining, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index; acquiring the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index; and acquiring the video data corresponding to the view time from the non-masked video file; and
    the acquiring of the first masked video data specifically comprises: acquiring, according to the association, the masked video index associated with the non-masked video index; acquiring, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area; and acquiring the video data corresponding to the view time from the one or more video files corresponding to the first masked area.
  4. The method according to claim 1, wherein
    the acquiring of the first masked video data and the non-masked video data, and the sending of the first masked video data and the non-masked video data to the first monitoring terminal specifically comprise:
    generating an acquisition address of the non-masked video data and an acquisition address of the first masked video data and sending the acquisition addresses to the first monitoring terminal, where the acquisition address of the first masked video data or a message carrying the acquisition address of the masked video data includes a data type used to indicate that the video data corresponding to the acquisition address is masked video data;
    receiving a request that is sent by the first monitoring terminal and includes the acquisition address of the non-masked video data, establishing, according to the acquisition address of the non-masked video data, a media channel with the first monitoring terminal that is used to send the non-masked video data, acquiring the non-masked video data according to the acquisition address of the non-masked video data, and sending the non-masked video data over the media channel; and
    receiving a request that is sent by the first monitoring terminal and includes the acquisition address of the first masked video data, establishing, according to the acquisition address of the first masked video data, a media channel with the first monitoring terminal that is used to send the first masked video data, acquiring the first masked video data according to the acquisition address of the first masked video data, and sending the first masked video data over the media channel.
  5. A method for implementing video access, wherein a peripheral unit communicates with a monitoring platform through a transmission network, the method comprising:
    receiving, by the peripheral unit, description information of a masked area, where different areas have respective permissions, where the description information is sent by the monitoring platform and includes a coordinate of the masked area;
    encoding, by the peripheral unit according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area; and
    sending, by the peripheral unit, the non-masked video data and the masked video data to the monitoring platform, so that: the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, and sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data, where the first masked video data corresponds to a first masked area with a respective permission, and the first masked area includes a part of the masked area.
  6. The method according to claim 5, wherein
    the encoding of a captured video picture into masked video data corresponding to the masked area according to the description information of the masked area specifically comprises:
    when the masked area includes one area, encoding a video picture in the captured video picture into one channel of video data, where the video picture corresponds to the masked area; or
    when the masked area includes multiple areas, encoding video pictures in the captured video picture into one channel of video data, where the video pictures correspond to the multiple areas in the masked area; or encoding video pictures in the captured video picture into one channel of video data each, where the video pictures correspond to the multiple areas in the masked area; or encoding video pictures in the captured video picture into one channel of video data, where the video pictures correspond to areas with the same permission among the multiple areas in the masked area.
  7. The method according to claim 5, wherein
    the encoding of a captured video image into masked area video data corresponding to the masked area in accordance with the description information of the masked area specifically comprises: directly encoding a video image in the captured video image into masked area video data, wherein the video image corresponds to the masked area; or encoding a video image in the captured video image after filling the video image with a set of pixel values so as to generate the masked area video data, wherein the video image corresponds to the non-masked area; and
    the encoding of a captured video image into non-masked area video data corresponding to the non-masked area in accordance with the description information of the masked area specifically comprises: directly encoding a video image in the captured video image into non-masked area video data, wherein the video image corresponds to the non-masked area; or encoding a video image in the captured video image after filling the video image with a set of pixel values so as to generate the non-masked area video data, wherein the video image corresponds to the masked area.
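The fill-based variant of claim 7 can be sketched in a few lines: the masked-area pixels are cut out for the masked-area stream, and the same region in the remaining frame is overwritten with one pixel value before encoding. Frames are plain 2-D lists here, and the (x, y, w, h) rectangle layout of the masked-area coordinate is an assumption for illustration:

```python
def split_frame(frame, mask_rect, fill_value=0):
    # frame: 2-D list of pixel values; mask_rect = (x, y, w, h) stands in
    # for the masked-area coordinate from the description information.
    x, y, w, h = mask_rect
    # Pixels of the masked area, used for the masked area video data.
    masked_part = [row[x:x + w] for row in frame[y:y + h]]
    # Copy of the frame with the masked area filled with one pixel value,
    # used for the non-masked area video data (claim 7, second alternative).
    non_masked = [row[:] for row in frame]
    for r in range(y, y + h):
        for c in range(x, x + w):
            non_masked[r][c] = fill_value
    return non_masked, masked_part

frame = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
non_masked, masked_part = split_frame(frame, (1, 1, 2, 2))
# masked_part == [[5, 6], [8, 9]]
# non_masked  == [[1, 2, 3], [4, 0, 0], [7, 0, 0]]
```

Filling rather than cropping keeps both streams at full frame geometry, which simplifies independent encoding of each stream with a standard codec.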
  8. A monitoring platform, wherein the monitoring platform communicates with a first monitoring terminal over a transmission network, the monitoring platform comprising: a video request receiving unit, a determining unit, an acquiring unit, and a video data sending unit, wherein
    the video request receiving unit is configured to receive a video request sent by the first monitoring terminal, wherein
    the video request comprises a device identifier, and video data of a peripheral unit identified by the device identifier comprise non-masked area video data corresponding to a non-masked area and masked area video data corresponding to a masked area, wherein different areas have respective rights;
    the determining unit is configured to determine whether a user of the first monitoring terminal is authorized to acquire first masked area video data in the masked area video data, wherein the first masked area video data correspond to a first masked area with a respective right, and the first masked area comprises a part of the masked area;
    the acquiring unit is configured to acquire the non-masked area video data and the first masked area video data when the determination by the determining unit is positive; and
    the video data sending unit is configured to: when the determination by the determining unit is positive, send the first masked area video data and the non-masked area video data acquired by the acquiring unit, together with description information of the first masked area, to the first monitoring terminal, wherein the description information of the first masked area sent to the first monitoring terminal is used by the first monitoring terminal to merge and play the first masked area video data and the non-masked area video data; or merge the first masked area video data and the non-masked area video data acquired by the acquiring unit according to the description information of the first masked area to obtain merged video data, and send the merged video data to the first monitoring terminal; wherein the description information of the first masked area originates from a masked area setting request received from a second monitoring terminal and comprises a coordinate of the first masked area; wherein the video data sending unit is further configured to: when the determination by the determining unit is negative, send the non-masked area video data acquired by the acquiring unit to the first monitoring terminal; wherein
    the monitoring platform further comprises: a setting request receiving unit, a description information sending unit, and a first video data receiving unit; wherein the setting request receiving unit is configured to receive a masked area setting request sent by a second monitoring terminal, the masked area setting request comprising a device identifier of the peripheral unit and description information of the masked area; the description information sending unit is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit is configured to receive the non-masked area video data and the masked area video data that are sent by the peripheral unit and generated in accordance with the description information of the masked area; or
    the monitoring platform further comprises: a setting request receiving unit, a second video data receiving unit, and a video data separating unit; wherein the setting request receiving unit is configured to receive a masked area setting request sent by a second monitoring terminal, the masked area setting request comprising a device identifier of the peripheral unit and description information of the masked area; the second video data receiving unit is configured to receive complete video data sent by the peripheral unit; and the video data separating unit is configured to obtain the masked area video data and the non-masked area video data by separating the complete video data received by the second video data receiving unit.
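The determining unit's authorization check in claim 8 reduces to a lookup of the user's rights against the right attached to the requested first masked area. A minimal sketch, assuming a simple in-memory rights table (user names and area labels below are invented):

```python
def authorized(user, first_masked_area, rights_table):
    # True if the user holds the right for the requested first masked area;
    # unknown users get an empty right set and are denied.
    return first_masked_area in rights_table.get(user, set())

rights_table = {"operator": {"area-A", "area-B"}, "guest": set()}
print(authorized("operator", "area-A", rights_table))  # True
print(authorized("guest", "area-A", rights_table))     # False
```

On a positive result the platform forwards both streams plus the description information; on a negative result only the non-masked area video data are sent.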
  9. The monitoring platform according to claim 8, further comprising: a storage unit and a connection establishing unit, wherein
    the storage unit is configured to store the masked area video data in a masked video file and to store the non-masked area video data in a non-masked video file, wherein the masked video file comprises one or more video files;
    the connection establishing unit is configured to establish a connection between the masked video file and the non-masked video file;
    the video request receiving unit is specifically configured to receive a video request that comprises a viewing time and is sent by the first monitoring terminal; and
    the acquiring unit is, when the determination by the determining unit is positive, specifically configured to acquire video data corresponding to the viewing time in the non-masked video file, and is further specifically configured to acquire, in accordance with the connection established by the connection establishing unit, one or more video files that correspond to the first masked area and are connected to the non-masked video file, and to acquire video data corresponding to the viewing time in the one or more video files corresponding to the first masked area.
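Claim 9's storage scheme, masked and non-masked video data in separate but linked files retrieved by viewing time, can be sketched with an in-memory index. File names, the segment layout, and the interval-overlap test below are all assumptions for illustration:

```python
def overlaps(segment, view_start, view_end):
    # A stored (start, end) segment matches if it intersects the viewing time.
    start, end = segment
    return start < view_end and end > view_start

def fetch(view_start, view_end, files, links, plain_file, first_area):
    # Video data for the viewing time from the non-masked video file ...
    plain = [s for s in files[plain_file]["segments"]
             if overlaps(s, view_start, view_end)]
    # ... plus the connected masked video file(s) for the first masked area.
    masked = []
    for name in links[plain_file]:
        if files[name].get("area") == first_area:
            masked += [s for s in files[name]["segments"]
                       if overlaps(s, view_start, view_end)]
    return plain, masked

files = {
    "plain.vid":  {"segments": [(0, 60), (60, 120)]},
    "mask_A.vid": {"segments": [(0, 60), (60, 120)], "area": "A"},
}
links = {"plain.vid": ["mask_A.vid"]}  # connection between the files
print(fetch(30, 70, files, links, "plain.vid", "A"))
# ([(0, 60), (60, 120)], [(0, 60), (60, 120)])
```

The connection lets the platform resolve the correct masked files without scanning all stored masked data, and the same viewing-time filter is applied to both sides.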
  10. A peripheral unit, wherein the peripheral unit communicates with a monitoring platform over a transmission network, the peripheral unit comprising:
    a description information receiving unit, a video data encoding unit, and a video data sending unit, wherein the description information receiving unit is configured to receive description information of a masked area, wherein different areas have respective rights, wherein the description information is sent by the monitoring platform, and wherein the description information of the masked area comprises a coordinate of the masked area;
    the video data encoding unit is configured to encode a captured video image into non-masked area video data corresponding to a non-masked area and masked area video data corresponding to the masked area, in accordance with the description information of the masked area; and
    the video data sending unit is configured to send the non-masked area video data and the masked area video data to the monitoring platform.
  11. The peripheral unit according to claim 10, wherein
    the video data encoding unit is specifically configured to: encode video images in the captured video image into one channel of video data in accordance with the description information of the masked area, wherein the video images correspond to the multiple areas in the masked area, or encode video images in the captured video image each into one channel of video data, wherein the video images correspond to the multiple areas in the masked area, or encode video images in the captured video image into one channel of video data, wherein the video images correspond to areas with the same right among the multiple areas in the masked area; and is further specifically configured to encode a video image in the captured video image into the non-masked area video data in accordance with the description information of the masked area, wherein the video image corresponds to the non-masked area.
  12. A video monitoring system, comprising: a monitoring terminal and a monitoring platform, wherein the monitoring platform communicates with the monitoring terminal over a transmission network;
    wherein the monitoring terminal comprises a video request sending unit, a video data receiving unit, and a playing unit; wherein
    the video request sending unit is configured to send a video request to the monitoring platform, wherein the video request comprises a device identifier, and video data of a peripheral unit identified by the device identifier comprise non-masked area video data corresponding to a non-masked area and masked area video data corresponding to a masked area, wherein different areas have respective rights;
    the video data receiving unit is configured to receive first masked area video data and the non-masked area video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal is authorized to acquire the first masked area video data in the masked video data, wherein the first masked area video data correspond to a first masked area with a respective right, and the first masked area comprises all or a part of the masked area; and is further configured to receive the non-masked area video data sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal is not authorized to acquire the first masked area video data in the masked area video data; and
    the playing unit is configured to merge and play the first masked area video data and the non-masked area video data received by the video data receiving unit, or configured to play the non-masked area video data received by the video data receiving unit; and
    the monitoring platform is specifically the monitoring platform according to any one of claims 8-9.
  13. The video monitoring system according to claim 12, further comprising a peripheral unit, wherein
    the peripheral unit is specifically the peripheral unit according to claim 10 or 11.
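For the playing unit of claim 12, merging amounts to pasting the first-masked-area pixels back into the non-masked frame at the coordinate carried in the description information. A sketch on 2-D lists, with an assumed (x, y, w, h) rectangle layout for that coordinate:

```python
def merge_frame(non_masked, masked_part, mask_rect):
    # Paste the masked-area pixels back at the coordinate from the
    # description information; the (x, y, w, h) layout is an assumption.
    x, y, w, h = mask_rect
    merged = [row[:] for row in non_masked]
    for r in range(h):
        for c in range(w):
            merged[y + r][x + c] = masked_part[r][c]
    return merged

non_masked = [[1, 2, 3], [4, 0, 0], [7, 0, 0]]
masked_part = [[5, 6], [8, 9]]
print(merge_frame(non_masked, masked_part, (1, 1, 2, 2)))
# [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

An unauthorized terminal never receives `masked_part`, so it can only play the filled non-masked frame; an authorized terminal reconstructs the full image locally from the two streams.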
EP12872312.9A 2012-10-11 2012-10-11 Verfahren, vorrichtung und system zur implementierung einer videosperre Active EP2741237B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/082784 WO2014056171A1 (zh) 2012-10-11 2012-10-11 一种实现视频遮挡的方法、装置和系统

Publications (3)

Publication Number Publication Date
EP2741237A1 EP2741237A1 (de) 2014-06-11
EP2741237A4 EP2741237A4 (de) 2014-07-16
EP2741237B1 true EP2741237B1 (de) 2017-08-09

Family

ID=50476881

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12872312.9A Active EP2741237B1 (de) 2012-10-11 2012-10-11 Verfahren, vorrichtung und system zur implementierung einer videosperre

Country Status (3)

Country Link
EP (1) EP2741237B1 (de)
CN (1) CN103890783B (de)
WO (1) WO2014056171A1 (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111614930A (zh) * 2019-02-22 2020-09-01 浙江宇视科技有限公司 一种视频监控方法、系统、设备及计算机可读存储介质

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105208340B (zh) * 2015-09-24 2019-10-18 浙江宇视科技有限公司 一种视频数据的显示方法和装置
KR102051985B1 (ko) * 2015-09-30 2019-12-04 애플 인크. 이질적인 네트워킹 환경들에서 미디어 렌더링의 동기화
CN105866853B (zh) * 2016-04-13 2019-01-01 同方威视技术股份有限公司 安检监视控制系统和安检监视终端
CN106341664B (zh) * 2016-09-29 2019-12-13 浙江宇视科技有限公司 一种数据处理方法及装置
CN108206930A (zh) 2016-12-16 2018-06-26 杭州海康威视数字技术股份有限公司 基于隐私遮蔽显示图像的方法及装置
CN110324704A (zh) * 2018-03-29 2019-10-11 优酷网络技术(北京)有限公司 视频处理方法及装置
CN109063499B (zh) * 2018-07-27 2021-02-26 山东鲁能软件技术有限公司 一种灵活可配置的电子档案区域授权方法及系统
US11030212B2 (en) * 2018-09-06 2021-06-08 International Business Machines Corporation Redirecting query to view masked data via federation table
CN110958410A (zh) * 2018-09-27 2020-04-03 北京嘀嘀无限科技发展有限公司 视频处理方法、装置及行车记录仪
CN112422637B (zh) * 2020-07-07 2022-10-14 德能森智能科技(成都)有限公司 一种基于隐私管理的家居管理系统及疫情管理系统
CN112954458A (zh) * 2021-01-20 2021-06-11 浙江大华技术股份有限公司 视频遮挡方法、装置、电子装置和存储介质
CN113014949B (zh) * 2021-03-10 2022-05-06 读书郎教育科技有限公司 一种智慧课堂课程回放的学生隐私保护系统及方法
US20230154497A1 (en) 2021-11-18 2023-05-18 Parrot AI, Inc. System and method for access control, group ownership, and redaction of recordings of events
CN114189660A (zh) * 2021-12-24 2022-03-15 威艾特科技(深圳)有限公司 一种基于全向摄像头的监控方法及其系统
CN114419720B (zh) * 2022-03-30 2022-10-18 浙江大华技术股份有限公司 一种图像遮挡方法、系统及计算机可读存储介质
US12118864B2 (en) 2022-08-31 2024-10-15 SimpliSafe, Inc. Security device zones
WO2024050347A1 (en) * 2022-08-31 2024-03-07 SimpliSafe, Inc. Security device zones

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6509926B1 (en) * 2000-02-17 2003-01-21 Sensormatic Electronics Corporation Surveillance apparatus for camera surveillance system
FR2972886A1 (fr) * 2011-03-17 2012-09-21 Thales Sa Procede de compression/decompression de flux video partiellement masques, codeur et decodeur mettant en oeuvre le procede

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006070249A1 (en) * 2004-12-27 2006-07-06 Emitall Surveillance S.A. Efficient scrambling of regions of interest in an image or video to preserve privacy
JP4671133B2 (ja) * 2007-02-09 2011-04-13 富士フイルム株式会社 画像処理装置
CN101610396A (zh) * 2008-06-16 2009-12-23 北京智安邦科技有限公司 具有隐私保护的智能视频监控设备模组和系统及其监控方法
US8576282B2 (en) * 2008-12-12 2013-11-05 Honeywell International Inc. Security system with operator-side privacy zones
CN101710979B (zh) * 2009-12-07 2015-03-04 北京中星微电子有限公司 一种视频监控系统的管理方法及中央管理服务器
CN101848378A (zh) * 2010-06-07 2010-09-29 中兴通讯股份有限公司 一种家庭视频监控的装置、系统及方法
CN102547212A (zh) * 2011-12-13 2012-07-04 浙江元亨通信技术股份有限公司 多路视频图像的拼接方法

Also Published As

Publication number Publication date
EP2741237A4 (de) 2014-07-16
CN103890783B (zh) 2017-02-22
WO2014056171A1 (zh) 2014-04-17
EP2741237A1 (de) 2014-06-11
CN103890783A (zh) 2014-06-25

Similar Documents

Publication Publication Date Title
EP2741237B1 (de) Verfahren, vorrichtung und system zur implementierung einer videosperre
US10594988B2 (en) Image capture apparatus, method for setting mask image, and recording medium
US11023618B2 (en) Systems and methods for detecting modifications in a video clip
JP5346338B2 (ja) ビデオを索引化する方法及びビデオを索引化する装置
CN111133764B (zh) 信息处理设备、信息提供设备、控制方法和存储介质
KR102320455B1 (ko) 미디어 콘텐트를 전송하는 방법, 디바이스, 및 컴퓨터 프로그램
US20180176650A1 (en) Information processing apparatus and information processing method
KR102133207B1 (ko) 통신장치, 통신 제어방법 및 통신 시스템
US10757463B2 (en) Information processing apparatus and information processing method
JPWO2004004350A1 (ja) 画像データ配信システムならびにその画像データ送信装置および画像データ受信装置
WO2021147702A1 (zh) 一种视频处理方法及其装置
US20230045876A1 (en) Video Playing Method, Apparatus, and System, and Computer Storage Medium
CN108810567B (zh) 一种音频与视频视角匹配的方法、客户端和服务器
CN106657110A (zh) 一种流数据的加密传输方法和装置
WO2022111554A1 (zh) 一种视角切换方法及装置
JPWO2004004363A1 (ja) 画像符号化装置、画像送信装置および画像撮影装置
CN107241585B (zh) 视频监控方法及系统
CN113545099B (zh) 信息处理设备、再现处理设备、信息处理方法和再现处理方法
CN110636336A (zh) 发送装置及方法、接收装置及方法及计算机可读存储介质
JP7218105B2 (ja) ファイル生成装置、ファイル生成方法、処理装置、処理方法、及びプログラム
WO2018044731A1 (en) Systems and methods for hybrid network delivery of objects of interest in video
US20240235816A1 (en) Protecting augmented reality call content
CN115695858B (zh) 基于sei加密的虚拟制片视频母片编解码控制方法
WO2024151732A1 (en) Protecting augmented reality call content
JP2024165003A (ja) シーン記述編集装置及びプログラム

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20131003

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

A4 Supplementary search report drawn up and despatched

Effective date: 20140617

RIC1 Information provided on ipc code assigned before grant

Ipc: G06K 9/60 20060101AFI20140611BHEP

Ipc: G08B 13/196 20060101ALI20140611BHEP

17Q First examination report despatched

Effective date: 20150630

DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170103

GRAJ Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR1

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170310

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 917570

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170815

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602012035854

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20170809

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 917570

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171109

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171209

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171109

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171110

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602012035854

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20180511

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171031

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171031

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171011

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20171031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171031

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 7

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171011

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171011

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20121011

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602012035854

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G06K0009600000

Ipc: G06V0030200000

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240904

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20250904

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20250908

Year of fee payment: 14