EP2741237B1 - Method, apparatus and system for implementing video occlusion - Google Patents


Info

Publication number
EP2741237B1
EP2741237B1 (application EP12872312.9A)
Authority
EP
European Patent Office
Prior art keywords
video data
masked
masked area
video
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP12872312.9A
Other languages
German (de)
French (fr)
Other versions
EP2741237A4 (en)
EP2741237A1 (en)
Inventor
Duanling SONG
Feng Wang
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP2741237A1
Publication of EP2741237A4
Application granted
Publication of EP2741237B1
Legal status: Active

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19678 User interface
    • G08B 13/19686 Interfaces masking personal details for privacy, e.g. blurring faces, vehicle license plates

Definitions

  • Embodiments of the present invention relate to the field of video surveillance, and in particular, to a method, an apparatus, and a system for implementing video mask.
  • In the prior art, encryption processing is performed on the image data of a masked part in a video, and the processed video is sent to a monitoring terminal.
  • A user with permission can decrypt the image data of the masked part in the received video and see the complete video, while a user without permission cannot see the image of the masked part.
  • However, the terminal of the user without permission still acquires the image data of the masked part, and if abnormal means are used to decrypt that data, the image of the masked part can be seen. This causes a security risk.
  • Document WO 2006070249 A1 relates to a video surveillance system which addresses the issue of privacy rights and scrambles regions of interest in a video scene to protect the privacy of human faces and objects captured by the system.
  • The video surveillance system is configured to identify persons and/or objects captured in a region of interest of a video scene by various techniques, such as detecting changes in a scene or by face detection.
  • Document US 2010/0149330 A1 relates to a system and method for operator-side privacy zone masking of surveillance video.
  • The system includes a video surveillance camera equipped with a coordinate engine for determining coordinates of the current field of view of the surveillance camera, and a frame encoder for embedding the determined coordinates into video frames of the current field of view.
  • Embodiments of the present invention provide a method, an apparatus, and a system for implementing video mask, so as to solve a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • The peripheral unit 110 is configured to collect video data and send the collected video data to the monitoring platform through the transmission network.
  • The peripheral unit 110 may generate, according to set description information of a masked area, non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area, and separately transmit them to the monitoring platform.
  • In hardware, the peripheral unit 110 may be any type of camera device, for example, a network camera such as a dome camera, a box camera, or a semi-dome camera, or, as another example, an analog camera together with an encoder.
  • The monitoring platform 120 is configured to receive the masked video data and the non-masked video data that are sent by the peripheral unit 110, or to obtain masked video data and non-masked video data by separating complete video data received from the peripheral unit 110, and to send corresponding video data to the monitoring terminal 130 according to the permission of a user of the monitoring terminal. For a user that has permission to acquire the masked video data, the monitoring platform 120 may send the masked video data and the non-masked video data to the monitoring terminal for merging and playing; alternatively, the monitoring platform 120 may merge the masked video data and the non-masked video data and send the merged data to the monitoring terminal for playing.
  • The monitoring terminal 130 is configured to receive the video data sent by the monitoring platform, and, if the received video data includes the non-masked video data and the masked video data, it is further configured to merge and play the masked video data and the non-masked video data.
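The merging performed by the monitoring terminal can be sketched as follows, assuming decoded pictures are plain pixel arrays and each masked picture travels with its (x, y, w, h) area description. The function name and the data layout are invented for illustration and are not taken from the patent:

```python
def merge_pictures(nonmasked, masked_pictures):
    """Overlay decoded masked pictures onto the non-masked picture.
    nonmasked: list of rows of pixels; masked_pictures: list of
    ((x, y, w, h), pixels) pairs, where pixels is h rows of w pixels."""
    out = [row[:] for row in nonmasked]  # copy so the input is untouched
    for (x, y, w, h), pixels in masked_pictures:
        for r in range(h):
            for c in range(w):
                out[y + r][x + c] = pixels[r][c]
    return out

# A 2x2 non-masked picture with one 1x1 masked picture at (1, 0):
base = [[0, 0], [0, 0]]
merged = merge_pictures(base, [((1, 0, 1, 1), [[5]])])
```

A terminal without permission simply skips the overlay step and plays `nonmasked` as received.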
  • FIG. 2 is a schematic flowchart of a method for implementing video mask according to a first embodiment of the present invention.
  • Step 210 Receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • The masked video data and the non-masked video data may specifically be encoded in the H.264 format.
  • The device identifier is used to uniquely identify the peripheral unit; specifically, it may include an identifier of a camera of the peripheral unit, and may further include an identifier of a cloud mirror of the peripheral unit.
  • Step 220 Determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • The masked area may specifically include one or more areas, where each area may be rectangular, circular, polygonal, or the like. If one area is included, the masked video data corresponding to the masked area may specifically include one channel of video data. If multiple areas are included, the masked video data corresponding to the masked area may specifically include one channel of video data, or may include multiple channels of video data, for example, with each area included in the masked area corresponding to one channel of video data.
  • Description information of the masked area may be used to describe the masked area.
  • The description information of the masked area specifically includes the coordinates of the masked area.
  • For a rectangular area, the description information of the masked area may include the coordinates of at least three vertices of the rectangle, or may include only the coordinate of one vertex of the rectangle together with a width and a height of the rectangle, for example (x, y, w, h), where x is the horizontal coordinate of the upper-left vertex, y is the vertical coordinate of the upper-left vertex, w is the width, and h is the height.
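The two rectangular description forms above (vertex coordinates versus the (x, y, w, h) form) can be sketched in code; the class and method names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RectMask:
    """Description information of one rectangular masked area, (x, y, w, h) form."""
    x: int  # horizontal coordinate of the upper-left vertex
    y: int  # vertical coordinate of the upper-left vertex
    w: int  # width
    h: int  # height

    @classmethod
    def from_vertices(cls, vertices):
        """Build the (x, y, w, h) form from vertex coordinates of the rectangle."""
        xs = [v[0] for v in vertices]
        ys = [v[1] for v in vertices]
        return cls(min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

    def contains(self, px, py):
        """True if pixel (px, py) falls inside this masked area."""
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

# Three vertices are enough to determine the rectangle:
mask = RectMask.from_vertices([(100, 50), (300, 50), (100, 200)])
```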
  • Overall permission control may be performed for the masked video data; that is, permission to access the masked video data is classified into two levels: having access permission and having no access permission. In this case, it can be directly determined whether a user has permission to access the masked video data; the first masked video data is then the masked video data, and the first masked area is the whole masked area.
  • Area-based permission control may also be performed for the masked video data. Respective permission is set for different areas; that is, video data corresponding to different areas may correspond to different permission.
  • For example, the masked area includes three areas, where area 1 and area 2 correspond to permission A, and area 3 corresponds to permission B.
  • For another example, the masked area includes three areas, where area 1 corresponds to permission A, area 2 corresponds to permission B, and area 3 corresponds to permission C. In this case, it is necessary to determine whether the user has permission to access the masked video data that corresponds to a specific area.
  • The permission may be determined according to a password. For example, if a password that is received from the first monitoring terminal and used to acquire the first masked video data is determined to be correct (that is, the user inputs a correct password), it is determined that the user has the permission to acquire the first masked video data.
  • The permission may alternatively be determined according to a user identifier of the user of the first monitoring terminal.
  • An authorized user identifier may be preconfigured, and if the user identifier matches the authorized user identifier, it is determined that the user has the permission to acquire the first masked video data; an authorized account type may also be preconfigured, and if the account type corresponding to the user identifier matches the authorized account type, it is determined that the user has the permission to acquire the first masked video data.
  • The user identifier may be acquired during login performed by the user through the monitoring terminal.
  • Alternatively, the video request received in step 210 may carry the user identifier, and in this case the user identifier carried in the video request is acquired.
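The permission schemes above (by password, or by user identifier / account type, per masked area) can be sketched as follows; the permission store and every name in it are hypothetical, not the patent's data model:

```python
# Hypothetical per-area permission store: an authorized password,
# authorized user identifiers, and authorized account types.
PERMISSIONS = {
    "area1": {"password": "s3cret", "users": {"operator7"}, "account_types": {"admin"}},
}

def has_mask_permission(area_id, password=None, user_id=None, account_type_of=None):
    """Determine whether a user may acquire the masked video data of one area.

    Grants permission if the received password is correct, if the user
    identifier matches an authorized identifier, or if the user's account
    type (looked up via account_type_of) matches an authorized account type."""
    rule = PERMISSIONS.get(area_id)
    if rule is None:
        return False
    if password is not None and password == rule["password"]:
        return True
    if user_id is not None:
        if user_id in rule["users"]:
            return True
        if account_type_of and account_type_of(user_id) in rule["account_types"]:
            return True
    return False
```

Overall permission control is the degenerate case of a single entry covering the whole masked area.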
  • If the determined result is yes, perform step 230A; if the determined result is no, perform step 230B.
  • Step 230A Acquire the first masked video data and the non-masked video data; and send the first masked video data and the non-masked video data to the first monitoring terminal, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data and send the merged video data to the first monitoring terminal.
  • A data type of the masked video data may also be sent to the first monitoring terminal, so that the first monitoring terminal identifies the masked video data from the received video data.
  • The data type may specifically be included in an acquiring address (for example, a URL) that is sent to the first monitoring terminal and used to acquire the masked video data; or the data type may be included in a message that is sent to the first monitoring terminal and carries the acquiring address; or the data type may be sent in a process of establishing a media channel between the first monitoring terminal and a monitoring platform, where the media channel is used to transmit the masked video data.
  • The method may further include: sending description information of the first masked area to the first monitoring terminal, so that the first monitoring terminal merges and plays, according to the description information of the first masked area, the first masked video data and the non-masked video data that are received in step 230A.
  • The description information may be included in the acquiring address (for example, a URL) that is sent to the first monitoring terminal and used to acquire the masked video data; or the description information may be included in the message that is sent to the first monitoring terminal and carries the acquiring address; or the description information may be sent in the process of establishing the media channel used to transmit the masked video data.
  • Step 230B Acquire the non-masked video data and send it to the first monitoring terminal.
  • Exemplary implementation manners of step 230A and step 230B are as follows:
  • The acquiring address (for example, a URL) sent to the first monitoring terminal carries a data type.
  • The data type is used to indicate whether the video data that can be acquired according to the acquiring address is the non-masked video data or the masked video data, and may be carried in the URL (Universal Resource Locator).
  • Description information of the masked area corresponding to the masked video data (for example, a coordinate of the masked area) may be further carried in the acquiring address of the masked video data, so that the URL carries both the data type and the description information of the masked area.
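A sketch of an acquiring address that carries a data type and a masked-area description follows. The patent's own URL examples are not reproduced in this extract, so the scheme, host, and query-parameter names below are purely illustrative assumptions:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def build_acquiring_url(host, stream_id, data_type, mask_rect=None):
    """Build a hypothetical acquiring address.

    data_type: "masked" or "nonmasked", indicating which video data the
    address yields; mask_rect: optional (x, y, w, h) masked-area description
    carried only in masked-video addresses. Parameter names are invented."""
    params = {"datatype": data_type}
    if mask_rect is not None:
        params["maskarea"] = ",".join(map(str, mask_rect))
    return f"rtsp://{host}/{stream_id}?{urlencode(params)}"

url = build_acquiring_url("mu.example", "ch1", "masked", (10, 20, 100, 80))
```

The terminal can recover both fields by parsing the query string of the URL it receives.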
  • The monitoring platform may further send the data type and/or the description information of the masked area to the first monitoring terminal by message exchange.
  • The data type and/or the description information of the masked area may be included in a message body of an XML structure in a message that carries the URL.
  • A user-defined structure body in an RTSP ANNOUNCE message may also be used to carry the data type and/or the description information of the masked area in the process of establishing the media channel between the first monitoring terminal and the monitoring platform.
  • In step 230A, the acquiring the first masked video data and the non-masked video data, merging the first masked video data and the non-masked video data, and sending the merged video data to the first monitoring terminal specifically includes: generating an acquiring address (for example, a URL) used to acquire the merged video data and sending it to the first monitoring terminal; receiving a request that is sent by the first monitoring terminal and includes the acquiring address; establishing, with the first monitoring terminal according to the acquiring address, a media channel used to send the merged video data; and acquiring and merging the first masked video data and the non-masked video data and sending the merged video data to the first monitoring terminal through the media channel.
  • Step 230B may include: generating an acquiring address of the non-masked video data and sending it to the first monitoring terminal, receiving a request that is sent by the first monitoring terminal and includes the acquiring address, establishing, with the first monitoring terminal according to the acquiring address, a media channel used to send the non-masked video data, acquiring the non-masked video data according to the acquiring address of the non-masked video data and sending the non-masked video data through the media channel.
  • A CU (Client Unit) in this implementation manner is client software installed on a monitoring terminal, which provides monitoring personnel with functions such as real-time video surveillance, video query and playback, and cloud mirror operations.
  • A monitoring platform includes an SCU (Service Control Unit) and an MU (Media Unit).
  • The SCU and the MU may be implemented on a same general-purpose or dedicated server, or may be separately implemented on different general-purpose or dedicated servers.
  • Step 301 A CU sends a video request to an SCU of a monitoring platform, where the video request includes a device identifier and is used to request video data of a peripheral unit identified by the device identifier, and the video data includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • Step 302 The SCU determines whether a user of the CU has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • The implementation of step 302 is the same as that of step 220, and therefore no further details are provided herein.
  • If the determined result is yes, steps 303A-312A are performed. In this implementation manner, it is assumed that the first masked video data includes one channel of video data.
  • If the determined result is no, steps 303B-308B are performed.
  • Steps 303A-306A The SCU requests a URL of the first masked video data and a URL of the non-masked video data from an MU, and the MU generates the URL of the first masked video data and the URL of the non-masked video data and returns them to the SCU.
  • Step 307A The SCU returns the URL of the first masked video data and the URL of the non-masked video data to the CU.
  • Steps 308A-309A The CU requests the first masked video data from the MU according to the URL of the first masked video data, establishes, with the MU, a media channel used to transmit the first masked video data, and receives, through the media channel, the first masked video data sent by the MU.
  • Steps 310A-311A The CU requests the non-masked video data from the MU according to the URL of the non-masked video data, establishes, with the MU, a media channel used to transmit the non-masked video data, and receives, through the media channel, the non-masked video data sent by the MU.
  • Step 312A The CU merges and plays the first masked video data and the non-masked video data.
  • Steps 303B-304B The SCU requests a URL of the non-masked video data from the MU, and the MU generates the URL of the non-masked video data and returns it to the SCU.
  • Step 305B The SCU returns the URL of the non-masked video data to the CU.
  • Steps 306B-307B The CU requests the non-masked video data from the MU according to the URL of the non-masked video data, establishes, with the MU, a media channel used to transmit the non-masked video data, and receives, through the media channel, the non-masked video data sent by the MU.
  • Step 308B The CU plays the non-masked video data.
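The signaling in steps 301-312 can be approximated with a minimal in-memory simulation. The classes, method names, and URL shapes below are invented stand-ins for the SCU and MU roles; a real deployment would use RTSP or a similar media protocol for the channel setup that `fetch` abbreviates:

```python
import itertools

class MU:
    """Media unit: generates URLs and serves video data over 'media channels'."""
    def __init__(self):
        self._streams = {}
        self._ids = itertools.count(1)

    def generate_url(self, kind):
        url = f"rtsp://mu.example/{kind}/{next(self._ids)}"
        self._streams[url] = f"<{kind} video data>"
        return url

    def fetch(self, url):
        # Stands in for steps 308A-311A: channel setup plus media transfer.
        return self._streams[url]

class SCU:
    """Service control unit: checks permission and returns the URL list."""
    def __init__(self, mu, authorized_users):
        self.mu = mu
        self.authorized = authorized_users

    def handle_video_request(self, user):
        urls = [self.mu.generate_url("nonmasked")]      # always granted
        if user in self.authorized:                     # step 302
            urls.append(self.mu.generate_url("masked")) # steps 303A-307A
        return urls

mu = MU()
scu = SCU(mu, authorized_users={"alice"})
```

An authorized user receives two URLs (steps 303A-312A); an unauthorized user receives only the non-masked URL (steps 303B-308B).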
  • In the foregoing procedure, a monitoring platform determines the permission of a user of the monitoring terminal and, according to the determined result, sends only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data; alternatively, the platform sends video data merged from the masked video data and the non-masked video data. This solves the security risk problem that results, in the prior art, from sending image data of a masked part to terminals of users with different permission.
  • In addition, area-based permission control may be implemented; that is, if the masked area includes multiple areas, permission may be set for each area, and the masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to the monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • The first embodiment of the present invention not only can be used in a real-time video surveillance scenario, but also can be used in a video view scenario (for example, video playback and video downloading). If the first embodiment is used in the video view scenario, the acquiring of non-masked video data in steps 230A and 230B specifically means reading the non-masked video data from a non-masked video file, and the acquiring of masked video data in step 230A specifically means reading the masked video data from a masked video file.
  • Before step 210, the following operations are performed: storing the non-masked video data in the non-masked video file, storing the masked video data in the masked video file, and establishing an association between the masked video file and the non-masked video file.
  • The establishing an association between the masked video file and the non-masked video file specifically includes: recording a non-masked video index and a masked video index, and establishing an association between the non-masked video index and the masked video index, where the non-masked video index includes a device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file (for example, a storage address of the non-masked video file, which may specifically be an absolute path of the non-masked video file), and the indication information of the non-masked video data is used to indicate that the non-masked video index is an index of the non-masked video file; and the masked video index includes indication information of the masked video data and an identifier of the masked video file (for example, a storage address of the masked video file, which may specifically be an absolute path of the masked video file), and the indication information of the masked video data is used to indicate that the masked video index is an index of the masked video file.
  • Both the non-masked video index and the masked video index may include indication information of a non-independent index, where the indication information of the non-independent index is used to indicate an index associated with the index.
  • The indication information of the non-independent index of the non-masked video index is used to indicate a masked video index associated with the non-masked video index.
  • The non-masked video index and/or the masked video index may further include description information of a masked area, or information (for example, a storage address of the description information of the masked area) used to acquire the description information of the masked area.
  • The establishing an association between the non-masked video index and the masked video index may specifically include recording an identifier (for example, an index number) of the masked video index into the non-masked video index, or may further include recording an identifier (for example, an index number) of the non-masked video index into the masked video index, or may further include recording an association between the identifier of the masked video index and the identifier of the non-masked video index. It should be noted that if the masked video data includes multiple channels of video data, a masked video index may be established for each channel of video data, and an association is established between the non-masked video index and each masked video index.
  • Description information of the masked area corresponding to the video file, or information used to acquire the description information of the masked area corresponding to the video file, is recorded in each masked video index.
  • The video request sent in step 210 may further include a view time.
  • The acquiring the non-masked video data specifically means acquiring video data corresponding to the view time from the non-masked video file, and may specifically include: acquiring the non-masked video index according to the identifier of the peripheral unit, the view time, and the indication information of the non-masked video data; acquiring the non-masked video file according to the identifier of the non-masked video file in the non-masked video index; and acquiring the non-masked video data corresponding to the view time from the non-masked video file.
  • The acquiring the masked video data specifically means acquiring, according to the association between the masked video file and the non-masked video file, one or more video files that are associated with the non-masked video file and correspond to the first masked area, and acquiring the video data corresponding to the view time from the one or more video files corresponding to the first masked area. This specifically includes: acquiring, according to the association between the non-masked video index and the masked video index (for example, according to the identifier of the masked video index in the non-masked video index), the masked video index associated with the non-masked video index; acquiring, according to the identifier of the masked video file included in the masked video index, the one or more video files corresponding to the first masked area; and acquiring the video data corresponding to the view time from the one or more video files corresponding to the first masked area.
  • The masked video index associated with the non-masked video index may further be determined according to the indication information of the non-independent index in the non-masked video index, so as to improve the efficiency of the monitoring platform in retrieving the masked video index.
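The index records and the view-time lookup described above can be sketched as follows; the field names, sample file paths, and the epoch-seconds time unit are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class NonMaskedIndex:
    device_id: str                 # device identifier of the peripheral unit
    start: int                     # video start time (assumed epoch seconds)
    end: int                       # video end time
    file_id: str                   # e.g. absolute path of the non-masked video file
    masked_index_ids: list = field(default_factory=list)  # association to masked indexes

@dataclass
class MaskedIndex:
    index_id: int                  # index number recorded in the association
    file_id: str                   # e.g. absolute path of the masked video file

def find_files_for_view(nonmasked_indexes, masked_indexes, device_id, view_time):
    """Locate the non-masked file and its associated masked files for a view time."""
    for idx in nonmasked_indexes:
        if idx.device_id == device_id and idx.start <= view_time <= idx.end:
            by_id = {m.index_id: m for m in masked_indexes}
            return idx.file_id, [by_id[i].file_id for i in idx.masked_index_ids]
    return None, []

NONMASKED = [NonMaskedIndex("cam1", 100, 200, "/rec/cam1_nonmasked.mp4", [1])]
MASKED = [MaskedIndex(1, "/rec/cam1_masked_area1.mp4")]
```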
  • An acquiring address used to acquire the non-masked video data may be generated according to the non-masked video index and sent to the first monitoring terminal; a request that is sent by the first monitoring terminal and includes the acquiring address of the non-masked video data is received; a media channel used to send the non-masked video data is established with the first monitoring terminal according to the acquiring address of the non-masked video data; the non-masked video data is acquired according to the acquiring address of the non-masked video data; and the non-masked video data is sent through the media channel.
  • For example, as shown in the corresponding figure, the SCU of the monitoring platform acquires the non-masked video index after receiving the video request, requests, from the MU according to the non-masked video index, a URL used to acquire the non-masked video data corresponding to the non-masked video index, and sends the URL to the CU.
  • The MU receives the request that is sent by the CU and includes the URL, establishes, with the CU according to the URL, a media channel used to send the non-masked video data, reads the non-masked video data in the video file according to the URL, and sends the non-masked video data to the CU through the media channel.
  • A process of sending the masked video data after the masked video index is acquired is similar to the process of sending the non-masked video data after the non-masked video index is acquired, and therefore no further details are provided herein.
  • Further, the method may include: sending description information of the first masked area to the first monitoring terminal, so that the first monitoring terminal merges and plays, according to the description information of the first masked area, the first masked video data and the non-masked video data that are received in step 230A.
  • This may specifically include: acquiring the description information of the first masked area that is included in the non-masked video index or in a masked video index corresponding to the first masked video data, or acquiring the description information of the first masked area according to information that is included in the non-masked video index or the masked video index and used to acquire the description information of the first masked area, and sending the acquired description information of the first masked area to the first monitoring terminal.
  • The description information of the first masked area may be carried in a message that is sent to the first monitoring terminal and carries an acquiring address of the first masked video data.
  • The method further includes receiving a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes a device identifier of the peripheral unit and the description information of the masked area.
  • The description information of the masked area may be sent to the peripheral unit, and the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area are received; or the masked video data and the non-masked video data may be obtained by separating, according to the description information of the masked area, complete video data received from the peripheral unit.
  • The masked video data and the non-masked video data may be sent to the first monitoring terminal and be merged and played by the first monitoring terminal, or the masked video data and the non-masked video data may be merged and then sent to the first monitoring terminal.
  • The first monitoring terminal and the second monitoring terminal may be a same monitoring terminal.
  • In the embodiments of the present invention, the entity generating the non-masked video data and the masked video data may be a peripheral unit or a monitoring platform, and the entity merging the non-masked video data and the masked video data may be a monitoring platform or a monitoring terminal (that is, the first monitoring terminal in the first embodiment of the present invention).
  • A first exemplary implementation manner is as follows: As shown in FIG. 4, the peripheral unit generates the non-masked video data and the masked video data; the monitoring platform separately sends the monitoring terminal (for example, the first monitoring terminal in this embodiment) the non-masked video data and the masked video data (for example, the first masked video data in this embodiment) that a user has permission to acquire; and the monitoring terminal merges and plays the received video data.
  • Step 401 A second monitoring terminal sends a masked area setting request to a monitoring platform, where the masked area setting request includes a device identifier and description information of a masked area.
  • The masked area may specifically include one or more areas, where each area may be rectangular, circular, polygonal, or the like.
  • The description information of the masked area specifically includes the coordinates of the masked area.
  • For a rectangular area, the description information of the masked area may include the coordinates of at least three vertices of the rectangle, or may include only the coordinate of one vertex of the rectangle together with a width and a height of the rectangle, for example (x, y, w, h), where x is the horizontal coordinate of the upper-left vertex, y is the vertical coordinate of the upper-left vertex, w is the width, and h is the height.
  • Step 402 The monitoring platform sends the masked area setting request to a peripheral unit identified by the device identifier, where the masked area setting request includes the description information of the masked area.
  • Step 403 The peripheral unit encodes a captured video picture to generate masked video data and non-masked video data.
  • the peripheral unit encodes the captured video picture into the non-masked video data corresponding to a non-masked area and the masked video data corresponding to the masked area. If the masked area includes one area, a video picture corresponding to the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data.
  • if the masked area includes multiple areas, any of the following applies: the video pictures corresponding to all of the areas included in the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data; or the video picture corresponding to each of the areas included in the masked area may be encoded into its own channel of video data, that is, the masked video data includes multiple channels of video data and each area corresponds to one channel of video data; or video pictures corresponding to areas with the same permission among the multiple areas included in the masked area may be encoded into one channel of video data, that is, areas with the same permission correspond to a same channel of video data. For example, if the masked area includes three areas, where area 1 and area 2 correspond to the same permission and area 3 corresponds to another permission, the video pictures corresponding to area 1 and area 2 are encoded into a same channel of video data, and the video picture corresponding to area 3 is encoded into another channel of video data.
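The third option above, one channel of video data per permission level, can be sketched as a simple grouping step; the area/permission mapping structure is an assumption made for illustration:

```python
from collections import defaultdict

def group_areas_into_channels(area_permissions):
    """Group masked areas so that areas sharing the same permission
    are encoded into the same channel of masked video data.

    area_permissions: dict mapping area name -> permission identifier.
    Returns: dict mapping permission identifier -> list of area names,
    i.e. one channel of video data per permission level.
    """
    channels = defaultdict(list)
    for area, permission in sorted(area_permissions.items()):
        channels[permission].append(area)
    return dict(channels)

# The example from the text: area 1 and area 2 share one permission,
# area 3 has another, so two channels are produced.
channels = group_areas_into_channels(
    {"area1": "perm_A", "area2": "perm_A", "area3": "perm_B"})
```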
  • the video picture corresponding to the masked area may be directly encoded into the masked video data, that is, a video data frame of the masked video data includes only pixel data of the video picture corresponding to the masked area; or the whole captured video picture may be encoded after the video picture corresponding to the non-masked area is filled by using a set pixel value, so as to generate the masked video data, that is, a video data frame of the masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data.
  • Encoding formats include but are not limited to H.264, MPEG4, and MJPEG.
  • the video picture corresponding to the non-masked area may be directly encoded into the non-masked video data, or the whole captured video picture may be encoded after the video picture corresponding to the masked area is filled by using a set pixel value, so as to generate the non-masked video data, where the set pixel value is preferably RGB (0, 0, 0).
  • timestamps of video data frames corresponding to a same complete video picture are kept completely consistent in the masked video data and the non-masked video data.
  • the description information of the masked area is sent by the monitoring platform to the peripheral unit.
  • the description information of the masked area may be preset on the peripheral unit.
  • Step 404 Send the generated masked video data and non-masked video data to the monitoring platform.
  • the peripheral unit may further send a data type of the masked video data to the monitoring platform, so that the monitoring platform identifies the masked video data from received video data.
  • the data type may be specifically included in an acquiring address (for example, a URL) that is sent to the monitoring platform and used to acquire the masked video data (where the monitoring platform may acquire the masked video data from the peripheral unit by using the acquiring address), or the data type may be included in a message that is sent to the monitoring platform and used to carry the acquiring address, or the data type may be sent in a process of establishing a media channel between the monitoring platform and the peripheral unit and used to transmit the masked video data.
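One way to carry the data type inside the acquiring address is as a query parameter of the URL; the parameter name "datatype" and the value "masked" below are assumptions for illustration, since the specification only requires that the data type be carried so the platform can identify masked video data:

```python
from urllib.parse import urlparse, parse_qs

def data_type_from_acquiring_address(url):
    """Extract an assumed 'datatype' query parameter from an acquiring
    address, so the monitoring platform can tell masked video data
    apart from other received video data."""
    query = parse_qs(urlparse(url).query)
    values = query.get("datatype")
    return values[0] if values else None

dtype = data_type_from_acquiring_address(
    "rtsp://peripheral.example/stream1?datatype=masked")
```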
  • Step 405 A first monitoring terminal sends a video request to the monitoring platform, where the video request includes the device identifier of the peripheral unit.
  • Step 406 Determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • steps 407A-409A are performed.
  • steps 407B-408B are performed.
  • Step 407A The monitoring platform sends description information of the first masked area to the first monitoring terminal.
  • Step 408A The monitoring platform sends the first masked video data and the non-masked video data to the first monitoring terminal.
  • Step 409A The first monitoring terminal merges and plays the received first masked video data and non-masked video data.
  • the received first masked video data and non-masked video data are merged and played according to the description information of the first masked area.
  • the first masked video data includes one channel of video data
  • the first masked video data is decoded to obtain a masked video data frame
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in the masked video data frame is extracted
  • the extracted pixel data is added, according to the description information of the first masked area, to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and the complete video data frame is played.
  • the extracting the pixel data in the masked video data frame is specifically extracting all pixel data in the masked video data frame.
  • if, during the encoding, the whole captured video picture was encoded after the video picture corresponding to the non-masked area was filled by using a set pixel value so as to generate the masked video data, that is, a video data frame of the masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data, the pixel data of the pixel area corresponding to the first masked area is extracted from the masked video data frame according to the description information of the first masked area.
  • the first masked video data includes multiple channels of video data
  • each channel of video data in the first masked video data is decoded to obtain a masked video data frame of the channel of video data
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in masked video data frames of all channels of video data is extracted, where the masked video data frames have a same timestamp
  • the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and the complete video data frame is played.
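The per-frame merge described above can be sketched with frames as 2-D pixel arrays; the list-of-lists frame representation and the fill value 0 are assumptions for illustration:

```python
def merge_masked_into_frame(non_masked_frame, masked_frame, area):
    """Copy the pixel data of the masked area from a decoded masked
    video data frame into the non-masked video data frame with the
    same timestamp, producing a complete video data frame.

    Frames are 2-D lists of pixel values; 'area' is (x, y, w, h) taken
    from the description information of the first masked area.
    """
    x, y, w, h = area
    complete = [row[:] for row in non_masked_frame]  # copy, keep input intact
    for py in range(y, y + h):
        for px in range(x, x + w):
            complete[py][px] = masked_frame[py][px]
    return complete

# 4x4 frame: non-masked pixels are 1, the masked area is filled with 0
# in the non-masked stream, and the real pixels (9) travel in the
# masked stream.
non_masked = [[1, 1, 1, 1] for _ in range(4)]
masked = [[9 if (1 <= px < 3 and 1 <= py < 3) else 0 for px in range(4)]
          for py in range(4)]
complete = merge_masked_into_frame(non_masked, masked, (1, 1, 2, 2))
```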
  • both the non-masked video data and the first masked video data are transmitted to the first monitoring terminal through the RTP protocol.
  • the first monitoring terminal receives a non-masked video data code stream and a first masked video data code stream that are encapsulated through the RTP protocol, parses the non-masked video data code stream and the first masked video data code stream to obtain the non-masked video data and the first masked video data respectively, and separately caches the non-masked video data and the first masked video data in a decoder buffer area.
  • Frame data is synchronized according to a synchronization timestamp, that is, frame data that has a same timestamp is separately extracted from the non-masked video data and the first masked video data.
  • the extracted frame data of the non-masked video data and the extracted frame data of the first masked video data that have the same timestamp are separately decoded to generate corresponding YUV data.
  • YUV data of the first masked video data and YUV data of the non-masked video data are merged according to the description information of the first masked area, and the merged YUV data is rendered and played.
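The synchronization step above can be sketched as pairing frames by timestamp from the two decoder buffer areas; the buffer structure (timestamp-keyed dictionaries) is an assumption for illustration:

```python
def synchronize_frames(non_masked_buffer, masked_buffer):
    """Pair frame data from the two cached streams by synchronization
    timestamp, as required before the YUV data can be merged.

    Each buffer maps a timestamp to the frame data cached for that
    stream.  Only timestamps present in both streams correspond to a
    complete video picture and are emitted, in timestamp order.
    """
    common = sorted(set(non_masked_buffer) & set(masked_buffer))
    return [(ts, non_masked_buffer[ts], masked_buffer[ts]) for ts in common]

# Timestamp 180 has no masked counterpart yet, so only two pairs emerge.
pairs = synchronize_frames(
    {100: "nm_f1", 140: "nm_f2", 180: "nm_f3"},
    {100: "m_f1", 140: "m_f2"})
```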
  • a request for acquiring video data is sent to the peripheral unit after step 404.
  • video data that a user of the first monitoring terminal has permission to acquire may be requested from the peripheral unit according to the determined result in step 406. For example, if the user only has permission to acquire the non-masked video data, only the non-masked video data is requested; and if the user has permission to acquire the non-masked video data and the first masked video data, both the non-masked video data and the first masked video data are requested.
  • the peripheral unit After receiving the request, the peripheral unit generates the requested video data and returns it to the monitoring platform.
  • a method used by the peripheral unit to generate the non-masked video data and the first masked video data is the same as that in step 403, and therefore no further details are provided.
  • Step 407B The monitoring platform forwards the non-masked video data to the first monitoring terminal.
  • Step 408B The first monitoring terminal plays the received non-masked video data.
  • a second exemplary implementation manner is as follows: As shown in FIG. 7 , the peripheral unit generates the non-masked video data and the masked video data, and the monitoring platform merges the non-masked video data and the masked video data (that is, the first masked video data) that a user has permission to acquire and then sends the merged video data to the monitoring terminal.
  • Steps 501-506 are the same as steps 401-406, and therefore no further details are provided.
  • steps 507A-510A are performed.
  • steps 507B-508B are performed.
  • Step 507A is the same as step 407A.
  • Step 508A The monitoring platform merges the non-masked video data and the first masked video data.
  • the first masked video data and the non-masked video data are merged according to the description information of the masked area received in step 501.
  • the first masked video data includes one channel of video data
  • the first masked video data is decoded to obtain a masked video data frame
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in the masked video data frame is extracted
  • the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the masked area
  • the complete video data frame is encoded to obtain the merged video data.
  • the extracting the pixel data in the masked video data frame is specifically extracting all pixel data in the masked video data frame.
  • if, during the encoding, the whole captured video picture was encoded after the video picture corresponding to the non-masked area was filled by using a set pixel value so as to generate the first masked video data, that is, a video data frame of the first masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data, the pixel data of the pixel area corresponding to the first masked area is extracted from the masked video data frame.
  • the first masked video data includes multiple channels of video data
  • each channel of video data in the first masked video data is decoded to obtain a masked video data frame of the channel of video data
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in masked video data frames of all channels of video data is extracted, where the masked video data frames have a same timestamp
  • the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the masked area
  • the complete video data frame is encoded to obtain the merged video data.
  • both the non-masked video data and the first masked video data are transmitted to the monitoring platform through the RTP protocol.
  • Processing after the monitoring platform receives a non-masked video data code stream and a first masked video data code stream that are encapsulated through the RTP protocol is similar to the processing after the first monitoring terminal receives a code stream in step 409A.
  • a difference lies only in that the first monitoring terminal renders and plays YUV data after merging the YUV data, while the monitoring platform encodes merged YUV data after merging the YUV data, so as to generate the merged video data.
  • Step 509A Send the merged video data to the first monitoring terminal.
  • Step 510A The first monitoring terminal directly decodes and plays the merged video data.
  • Steps 507B-508B are the same as steps 407B-408B.
  • a third exemplary implementation manner is as follows: As shown in FIG. 10 , the peripheral unit generates complete video data, the monitoring platform obtains the masked video data and the non-masked video data by separating the complete video data received from the peripheral unit, and separately sends the monitoring terminal the non-masked video data and the masked video data that a user has permission to acquire, and the monitoring terminal merges and plays the received masked video data and non-masked video data.
  • Step 601 is the same as step 401, and therefore no further details are provided.
  • Step 602 The peripheral unit encodes a captured video picture into complete video data and sends the complete video data to the monitoring platform.
  • Step 603 The monitoring platform separates the complete video data according to the description information of the masked area received in step 601 to obtain the masked video data corresponding to the masked area and the non-masked video data corresponding to the non-masked area.
  • a video picture in the complete video data may be encoded into one channel of video data, that is, the masked video data includes one channel of video data, where the video picture corresponds to the masked area.
  • if the masked area includes multiple areas, any of the following applies: the video pictures in the complete video data that correspond to all of the areas included in the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data; or the video picture in the complete video data that corresponds to each of the areas included in the masked area may be encoded into its own channel of video data, that is, the masked video data includes multiple channels of video data and each area corresponds to one channel of video data; or video pictures corresponding to areas with the same permission among the multiple areas included in the masked area may be encoded into one channel of video data, that is, areas with the same permission correspond to a same channel of video data. For example, if the masked area includes three areas, where area 1 and area 2 correspond to the same permission and area 3 corresponds to another permission, the video pictures corresponding to area 1 and area 2 are encoded into a same channel of video data, and the video picture corresponding to area 3 is encoded into another channel of video data.
  • the video picture corresponding to the masked area may be directly encoded into the masked video data. This includes: decoding the complete video data to obtain a complete video data frame and extracting pixel data of the video picture in the complete video data frame to generate a video data frame of the masked video data, where the video picture corresponds to the masked area.
  • a video picture in the whole captured video picture may also be encoded after filling the video picture by using a set pixel value so as to generate the masked video data, where the video picture corresponds to the non-masked area.
  • the obtaining of the non-masked video data corresponding to the non-masked area may specifically be directly encoding the video picture corresponding to the non-masked area into the non-masked video data, which includes decoding the complete video data to obtain a complete video data frame and extracting the pixel data of the video picture corresponding to the non-masked area from the complete video data frame to generate a video data frame of the non-masked video data; or it may be encoding the whole video picture after filling the video picture corresponding to the masked area by using a set pixel value so as to generate the non-masked video data, which includes decoding the complete video data to obtain a complete video data frame and setting the pixels of the pixel area corresponding to the masked area in the complete video data frame to the set pixel value, where the set pixel value is preferably RGB (0, 0, 0).
  • Encoding formats include but are not limited to H.264, MPEG4, and MJPEG.
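The separation in step 603 can be sketched at frame level: the masked stream keeps only the masked-area pixels, and the non-masked stream replaces them with the set pixel value. The 2-D frame representation and the fill value 0 (standing in for RGB (0, 0, 0)) are assumptions:

```python
def separate_frame(complete_frame, area, fill=0):
    """Split one complete decoded video data frame into a non-masked
    frame (masked area replaced by the set pixel value) and a masked
    frame (only masked-area pixels kept, the rest filled)."""
    x, y, w, h = area
    non_masked, masked = [], []
    for py, row in enumerate(complete_frame):
        nm_row, m_row = [], []
        for px, pixel in enumerate(row):
            inside = x <= px < x + w and y <= py < y + h
            nm_row.append(fill if inside else pixel)
            m_row.append(pixel if inside else fill)
        non_masked.append(nm_row)
        masked.append(m_row)
    return non_masked, masked

# 3x3 frame whose centre pixel (value 7) falls in the masked area.
complete = [[5, 5, 5], [5, 7, 5], [5, 5, 5]]
non_masked, masked = separate_frame(complete, (1, 1, 1, 1))
```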
  • Steps 604-605 are the same as steps 405-406.
  • steps 606A-608A are performed.
  • steps 606B-607B are performed.
  • Steps 606A-608A are the same as steps 407A-409A.
  • Steps 606B-607B are the same as steps 407B-408B.
  • a second embodiment of the present invention provides a monitoring platform 500.
  • the monitoring platform includes a video request receiving unit 501, a determining unit 502, an acquiring unit 503, and a video data sending unit 504.
  • the video request receiving unit 501 is configured to receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the determining unit 502 is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the acquiring unit 503 is configured to acquire the non-masked video data and configured to acquire the first masked video data when a determined result of the determining unit 502 is yes.
  • the video data sending unit 504 is configured to: when the determined result of the determining unit 502 is yes, send the first monitoring terminal the first masked video data and the non-masked video data that are acquired by the acquiring unit 503, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data that are acquired by the acquiring unit 503 to obtain merged video data, and send the merged video data to the first monitoring terminal; and further configured to: when the determined result of the determining unit 502 is no, send the first monitoring terminal the non-masked video data acquired by the acquiring unit 503.
  • the monitoring platform further includes a setting request receiving unit 505.
  • the setting request receiving unit 505 is configured to receive a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area.
  • the monitoring platform further includes a description information sending unit 506 and a first video data receiving unit 507.
  • the description information sending unit 506 is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit 507 is configured to receive the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area.
  • the monitoring platform further includes a second video data receiving unit 508 and a video data separating unit 509.
  • the second video data receiving unit 508 is configured to receive complete video data sent by the peripheral unit; and the video data separating unit 509 is configured to obtain the masked video data and the non-masked video data by separating the complete video data received by the second video data receiving unit.
  • the monitoring platform further includes a storing unit and an association establishing unit.
  • the storing unit is configured to store the masked video data into a masked video file and store the non-masked video data into a non-masked video file, and the masked video file includes one or more video files.
  • the association establishing unit is configured to establish an association between the masked video file and the non-masked video file.
  • the video request receiving unit 501 is specifically configured to receive a video request that includes view time and is sent by the first monitoring terminal.
  • the acquiring unit 503 is specifically configured to acquire video data corresponding to the view time from the non-masked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and are associated with the non-masked video file and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area when the determined result of the determining unit 502 is yes.
  • the association establishing unit is specifically configured to record a non-masked video index and a masked video index and establish an association between the non-masked video index and the masked video index, where the non-masked video index includes the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file.
  • the acquiring unit 503 is specifically configured to obtain, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index, acquire the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index, and acquire the video data corresponding to the view time from the non-masked video file; and further specifically configured to acquire, when the determined result of the determining unit 502 is yes, the masked video index associated with the non-masked video index according to the association, acquire, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area, and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area.
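The index matching performed by the acquiring unit 503 can be sketched as follows; the dictionary field names are assumptions derived from the description above (device identifier, start/end time, non-masked indication, file identifiers):

```python
def match_non_masked_index(indexes, device_id, view_time):
    """Find the non-masked video index whose device identifier matches
    the video request and whose recorded time span covers the view
    time, as the acquiring unit does before fetching video files."""
    for idx in indexes:
        if (idx["device_id"] == device_id
                and idx["is_non_masked"]
                and idx["start"] <= view_time <= idx["end"]):
            return idx
    return None

# Two recorded hours for the same peripheral unit; the association to
# the masked video files is stored alongside each non-masked index.
indexes = [
    {"device_id": "cam-01", "is_non_masked": True,
     "start": 0, "end": 3600, "file_id": "nm-001",
     "masked_file_ids": ["m-001a", "m-001b"]},
    {"device_id": "cam-01", "is_non_masked": True,
     "start": 3600, "end": 7200, "file_id": "nm-002",
     "masked_file_ids": ["m-002a"]},
]
hit = match_non_masked_index(indexes, "cam-01", 4000)
```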
  • a functional unit described in the second embodiment of the present invention can be used to implement the method described in the first embodiment.
  • the video request receiving unit 501, the determining unit 502, the setting request receiving unit 505, and the description information sending unit 506 are located on an SCU of the monitoring platform, and the acquiring unit 503, the video data sending unit 504, the first video data receiving unit 507, the second video data receiving unit 508, and the video data separating unit 509 are located on an MU of the monitoring platform.
  • a monitoring platform determines the permission of a user of the monitoring terminal and, according to the determined result, sends only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data. To a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, the monitoring platform sends the masked video data and the non-masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data. This solves the security risk problem in the prior art that results from sending image data of a masked part to terminals of users with different permission.
  • area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • a third embodiment of the present invention provides a monitoring terminal 600.
  • the monitoring terminal includes a video request sending unit 601, a video data receiving unit 602, and a playing unit 603.
  • the video request sending unit 601 is configured to send a video request to a monitoring platform, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the video data receiving unit 602 is configured to receive first masked video data and the non-masked video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area; and further configured to receive the non-masked video data that is sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data in the masked video data.
  • the playing unit is configured to merge and play the first masked video data and the non-masked video data that are received by the video data receiving unit 602, or configured to play the non-masked video data received by the video data receiving unit 602.
  • the playing unit is specifically configured to decode the first masked video data to obtain a masked video data frame, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in the masked video data frame, add, according to description information of the first masked area, the extracted pixel data to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • the playing unit is specifically configured to decode each channel of video data in the first masked video data to obtain a masked video data frame of the channel of video data, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in masked video data frames of all channels of video data, where the masked video data frames have a same timestamp, add the extracted pixel data to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • a functional unit described in the third embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a fourth embodiment of the present invention provides a peripheral unit 700.
  • the peripheral unit includes a description information receiving unit 701, a video data encoding unit 702, and a video data sending unit 703.
  • the description information receiving unit 701 is configured to receive description information of a masked area, where the description information is sent by a monitoring platform.
  • the video data encoding unit 702 is configured to encode, according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area.
  • the video data sending unit 703 is configured to send the non-masked video data and the masked video data to the monitoring platform, so that the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, or sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the video data encoding unit 702 is specifically configured to: when the masked area includes one area, encode a video picture in the captured video picture into one channel of video data according to the description information of the masked area, where the video picture corresponds to the masked area; or when the masked area includes multiple areas, encode video pictures in the captured video picture into one channel of video data according to the description information of the masked area, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data each, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data, where the video pictures correspond to areas with same permission among the multiple areas included in the masked area; and further specifically configured to encode a video picture in the captured video picture into the non-masked video data according to the description information of the masked area, where the video picture corresponds to the non-masked area.
  • a functional unit described in the fourth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a fifth embodiment of the present invention provides a monitoring platform 1000, including:
  • the processor 1010, the communications interface 1020, and the memory 1030 communicate with each other through the bus 1040.
  • the communications interface 1020 is configured to communicate with a network element, for example, communicate with a monitoring terminal or a peripheral unit.
  • the processor 1010 is configured to execute a program 1032.
  • the program 1032 may include a program code, and the program code includes a computer operation instruction.
  • the processor 1010 is configured to execute a computer program stored in the memory and may specifically be a central processing unit (CPU), which is the core unit of a computer.
  • the memory 1030 is configured to store the program 1032.
  • the memory 1030 may include a high-speed RAM memory, and may further include a non-volatile memory, for example, at least one disk memory.
  • the program 1032 may specifically include a video request receiving unit 1032-1, a determining unit 1032-2, an acquiring unit 1032-3, and a video data sending unit 1032-4.
  • the video request receiving unit 1032-1 is configured to receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the determining unit 1032-2 is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the acquiring unit 1032-3 is configured to acquire the non-masked video data and configured to acquire the first masked video data when a determined result of the determining unit 1032-2 is yes.
  • the video data sending unit 1032-4 is configured to: when the determined result of the determining unit 1032-2 is yes, send the first monitoring terminal the first masked video data and the non-masked video data that are acquired by the acquiring unit 1032-3, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data that are acquired by the acquiring unit 1032-3 to obtain merged video data, and send the merged video data to the first monitoring terminal; and further configured to: when the determined result of the determining unit 1032-2 is no, send the first monitoring terminal the non-masked video data acquired by the acquiring unit 1032-3.
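The permission-dependent branching of the determining unit 1032-2, the acquiring unit 1032-3, and the video data sending unit 1032-4 can be sketched as follows; the in-memory store, user names, and function names are illustrative assumptions only, and the byte strings stand in for real encoded video:

```python
# Minimal sketch of the platform-side dispatch. All names and the
# in-memory "store" are assumptions, not part of the claimed embodiments.

PERMISSIONS = {"alice": {"cam-01"}}   # users allowed masked data, per device

STORE = {                             # device identifier -> stored video data
    "cam-01": {"non_masked": b"NM-bytes", "masked": b"M-bytes"},
}

def serve_video_request(user, device_id, merge_on_platform=False):
    non_masked = STORE[device_id]["non_masked"]
    if device_id not in PERMISSIONS.get(user, set()):
        return [non_masked]                # "no" branch: non-masked data only
    masked = STORE[device_id]["masked"]
    if merge_on_platform:
        return [masked + non_masked]       # stand-in for a real frame merge
    return [non_masked, masked]            # "yes" branch: terminal merges

print(len(serve_video_request("alice", "cam-01")))  # → 2
print(len(serve_video_request("bob", "cam-01")))    # → 1
```

The two-element result models sending both streams for terminal-side merging; the single merged element models platform-side merging.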
  • the program further includes a setting request receiving unit 1032-5.
  • the setting request receiving unit 1032-5 is configured to receive a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area.
  • the monitoring platform further includes a description information sending unit 1032-6 and a first video data receiving unit 1032-7.
  • the description information sending unit 1032-6 is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit 1032-7 is configured to receive the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area.
  • the monitoring platform further includes a second video data receiving unit 1032-8 and a video data separating unit 1032-9.
  • the second video data receiving unit 1032-8 is configured to receive complete video data sent by the peripheral unit; and the video data separating unit 1032-9 is configured to obtain the masked video data and the non-masked video data by separating the complete video data received by the second video data receiving unit.
  • the program further includes a storing unit and an association establishing unit.
  • the storing unit is configured to store the masked video data into a masked video file and store the non-masked video data into a non-masked video file, and the masked video file includes one or more video files.
  • the association establishing unit is configured to establish an association between the masked video file and the non-masked video file.
  • the video request receiving unit 1032-1 is specifically configured to receive a video request that includes view time and is sent by the first monitoring terminal.
  • the acquiring unit 1032-3 is specifically configured to acquire video data corresponding to the view time from the non-masked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and are associated with the non-masked video file and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area when the determined result of the determining unit 1032-2 is yes.
  • the association establishing unit is specifically configured to record a non-masked video index and a masked video index and establish an association between the non-masked video index and the masked video index, where the non-masked video index includes the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file.
  • the acquiring unit 1032-3 is specifically configured to obtain, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index, acquire the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index, and acquire the video data corresponding to the view time from the non-masked video file; and further specifically configured to acquire, when the determined result of the determining unit 1032-2 is yes, the masked video index associated with the non-masked video index according to the association, acquire, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area, and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area.
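The index matching described above — match the non-masked video index by device identifier and view time, then follow the association to the masked video files — can be sketched as follows; the record layout and all names are illustrative assumptions:

```python
import datetime as dt

# Hypothetical index records carrying the fields named above: the device
# identifier, the video start and end time, the identifier of the
# non-masked video file, and the associated masked video file identifiers.
NON_MASKED_INDEX = [
    {"device": "cam-01",
     "start": dt.datetime(2012, 3, 1, 8, 0),
     "end": dt.datetime(2012, 3, 1, 9, 0),
     "file": "nm-0001",
     "masked_files": ["m-0001a", "m-0001b"]},
]

def find_files(device_id, view_time, has_permission):
    """Match the non-masked index by device id and view time; follow the
    association to the masked video files only when permission allows."""
    for idx in NON_MASKED_INDEX:
        if idx["device"] == device_id and idx["start"] <= view_time <= idx["end"]:
            files = [idx["file"]]
            if has_permission:
                files += idx["masked_files"]
            return files
    return []   # no index entry covers the requested view time

print(find_files("cam-01", dt.datetime(2012, 3, 1, 8, 30), True))
# → ['nm-0001', 'm-0001a', 'm-0001b']
```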
  • each unit in the program 1032 refers to a corresponding unit in the second embodiment of the present invention, and therefore no further details are provided herein.
  • a functional unit described in the fifth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • a sixth embodiment of the present invention provides a monitoring terminal 2000, including:
  • the processor 2010, the communications interface 2020, and the memory 2030 communicate with each other through the bus 2040.
  • the communications interface 2020 is configured to communicate with a network element, for example, communicate with a monitoring platform.
  • the processor 2010 is configured to execute a program 2032.
  • the program 2032 may include a program code, and the program code includes a computer operation instruction.
  • the processor 2010 is configured to execute a computer program stored in the memory 2030 and may specifically be a central processing unit (CPU), which is the core unit of a computer.
  • the memory 2030 is configured to store the program 2032.
  • the memory 2030 may include a high-speed RAM, or may further include a non-volatile memory, for example, at least one disk memory.
  • the program 2032 may specifically include a video request sending unit 2032-1, a video data receiving unit 2032-2, and a playing unit 2032-3.
  • the video request sending unit is configured to send a video request to a monitoring platform, the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the video data receiving unit 2032-2 is configured to receive first masked video data and the non-masked video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area; and further configured to receive the non-masked video data that is sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data in the masked video data.
  • the playing unit is configured to merge and play the first masked video data and the non-masked video data that are received by the video data receiving unit 2032-2, or configured to play the non-masked video data received by the video data receiving unit 2032-2.
  • the playing unit is specifically configured to decode the first masked video data to obtain a masked video data frame, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in the masked video data frame, add, according to description information of the first masked area, the extracted pixel data to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • the playing unit is specifically configured to decode each channel of video data in the first masked video data to obtain a masked video data frame of the channel of video data, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in masked video data frames of all channels of video data, where the masked video data frames have a same timestamp, add the extracted pixel data to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
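The timestamp-aligned pixel merge performed by the playing unit can be sketched on toy grayscale frames; the frame layout, the `rect` field, and all names are illustrative assumptions, and the actual decoding of encoded video data is omitted:

```python
def merge_frames(non_masked_frame, masked_frames):
    """Overlay decoded masked-area pixels onto the non-masked frame with
    the same timestamp. Frames are rows of pixel values; each masked
    frame carries the (x, y, w, h) description of its area."""
    out = [row[:] for row in non_masked_frame]   # copy the base frame
    for area in masked_frames:
        x, y, w, h = area["rect"]
        for dy in range(h):
            for dx in range(w):
                out[y + dy][x + dx] = area["pixels"][dy][dx]
    return out

base = [[0] * 4 for _ in range(3)]                   # 4x3 frame, all zeros
patch = {"rect": (1, 1, 2, 1), "pixels": [[7, 8]]}   # masked area at (1, 1)
print(merge_frames(base, [patch]))
# → [[0, 0, 0, 0], [0, 7, 8, 0], [0, 0, 0, 0]]
```

With multiple channels of masked video data, each decoded frame of the same timestamp would simply contribute one more entry to `masked_frames`.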
  • each unit in the program 2032 refers to a corresponding unit in the third embodiment of the present invention, and therefore no further details are provided herein.
  • a functional unit described in the sixth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a seventh embodiment of the present invention provides a peripheral unit 3000, including:
  • the processor 3010, the communications interface 3020, and the memory 3030 communicate with each other through the bus 3040.
  • the communications interface 3020 is configured to communicate with a network element, for example, communicate with a monitoring platform.
  • the processor 3010 is configured to execute a program 3032.
  • the program 3032 may include a program code, and the program code includes a computer operation instruction.
  • the processor 3010 is configured to execute a computer program stored in the memory 3030 and may specifically be a central processing unit (CPU), which is the core unit of a computer.
  • the memory 3030 is configured to store the program 3032.
  • the memory 3030 may include a high-speed RAM, or may further include a non-volatile memory, for example, at least one disk memory.
  • the program 3032 may specifically include a description information receiving unit 3032-1, a video data encoding unit 3032-2, and a video data sending unit 3032-3.
  • the description information receiving unit 3032-1 is configured to receive description information of a masked area, where the description information is sent by a monitoring platform;
  • the video data encoding unit 3032-2 is configured to encode, according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area.
  • the video data sending unit 3032-3 is configured to send the non-masked video data and the masked video data to the monitoring platform, so that the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, or sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the video data encoding unit 3032-2 is specifically configured to: when the masked area includes one area, encode a video picture in the captured video picture into one channel of video data according to the description information of the masked area, where the video picture corresponds to the masked area; or when the masked area includes multiple areas, encode video pictures in the captured video picture into one channel of video data according to the description information of the masked area, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data each, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data, where the video pictures correspond to areas with same permission among the multiple areas included in the masked area; and further specifically configured to encode a video picture in the captured video picture into the non-masked video data according to the description information of the masked area, where the video picture corresponds to the non-masked area.
  • each unit in the program 3032 refers to a corresponding unit in the fourth embodiment of the present invention, and therefore no further details are provided herein.
  • a functional unit described in the seventh embodiment of the present invention can be used to implement the method described in the first embodiment.
  • an eighth embodiment of the present invention provides a video surveillance system 4000.
  • the video surveillance system includes a monitoring terminal 4010 and a monitoring platform 4020.
  • the monitoring terminal 4010 is specifically the monitoring terminal according to the third or the sixth embodiment.
  • the monitoring platform 4020 is specifically the monitoring platform according to the second or the fifth embodiment.
  • the video surveillance system may further include a peripheral unit 4030, which is specifically the peripheral unit according to the fourth or the seventh embodiment.
  • a functional unit described in the eighth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the described apparatus embodiment is merely exemplary.
  • the unit division is merely logical function division and may be other division in actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one position, or may be distributed on a plurality of network units. A part or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit.
  • When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or a part of the technical solutions, may be implemented in a form of a software product.
  • the computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or a part of the steps of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes: any medium that can store a program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.

Description

    TECHNICAL FIELD
  • Embodiments of the present invention relate to the field of video surveillance, and in particular, to a method, an apparatus, and a system for implementing video mask.
  • BACKGROUND
  • In the field of video surveillance, a requirement of protecting personal privacy exists, and therefore mask processing needs to be performed for a video image of a part of an area shot by a camera, so that a common user sees a video picture that does not include the image of a masked part, and the image of the masked part can only be viewed by a user with advanced permission.
  • In the prior art, encryption processing is performed for image data of a masked part in a video, and the processed video is sent to a monitoring terminal. A user with permission is capable of decrypting the image data of the masked part in the received video to see the complete video, while a user with no permission cannot see the image of the masked part. However, in the prior art, a terminal of the user with no permission is also capable of acquiring the image data of the masked part, and if an abnormal means is used to decrypt the data of the part, the image of the masked part can be seen. This causes a security risk.
  • The document, WO 2006070249A1 , relates to a video surveillance system which addresses the issue of privacy rights and scrambles regions of interest in a scene in a video scene to protect the privacy of human faces and objects captured by the system. The video surveillance system is configured to identify persons and or objects captured in a region of interest of a video scene by various techniques, such as detecting changes in a scene or by face detection.
  • The document, US 20100149330A1 , relates to a system and method for operator-side privacy zone masking of surveillance. The system includes a video surveillance camera equipped with a coordinate engine for determining coordinates of a current field of view of the surveillance camera; and a frame encoder for embedding the determined coordinates with video frames of the current field of view.
  • SUMMARY
  • Embodiments of the present invention provide a method, an apparatus, and a system for implementing video mask, so as to solve a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • The present invention is defined in the attached claims.
  • BRIEF DESCRIPTION OF DRAWINGS
  • To illustrate the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
    • FIG. 1 is a schematic architecture diagram of a system according to an embodiment of the present invention;
    • FIG. 2 is a schematic flowchart of a method according to a first embodiment of the present invention;
    • FIG. 3 is a schematic flowchart of a method of an optional implementation manner according to the first embodiment of the present invention;
    • FIG. 4 is a schematic diagram of a scenario of a first exemplary implementation manner according to the first embodiment of the present invention;
    • FIG. 5 is a schematic flowchart of a method of the first exemplary implementation manner according to the first embodiment of the present invention;
    • FIG. 6 is a schematic processing flowchart of a terminal of the first exemplary implementation manner according to the first embodiment of the present invention;
    • FIG. 7 is a schematic diagram of a scenario of a second exemplary implementation manner according to the first embodiment of the present invention;
    • FIG. 8 is a schematic flowchart of a method of the second exemplary implementation manner according to the first embodiment of the present invention;
  • The wired access manner includes an access manner through a network cable or an access manner through an optical fiber, and the wireless access manner includes access manners such as Wi-Fi (for example, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), GSM (EDGE), WCDMA, CDMA, TD-SCDMA, Bluetooth, and LTE.
  • The peripheral unit 110 is configured to collect video data and send the collected video data to the monitoring platform through the transmission network. Preferably, the peripheral unit 110 may generate, according to set description information of a masked area, non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area, and separately transmit them to the monitoring platform. In hardware, the peripheral unit 110 may be any type of camera device, for example, a network camera such as a dome camera, a box camera, or a semi-dome camera, or, for another example, an analog camera combined with an encoder.
  • The monitoring platform 120 is configured to receive the masked video data and the non-masked video data that are sent by the peripheral unit 110, or obtain masked video data and non-masked video data by separating complete video data received from the peripheral unit 110, and send corresponding video data to the monitoring terminal 130 according to permission of a user of the monitoring terminal. For a user that has permission to acquire the masked video data, the monitoring platform 120 may send the masked video data and the non-masked video data to the monitoring terminal for merging and playing; alternatively, the monitoring platform 120 may merge the masked video data and the non-masked video data and send them to the monitoring terminal for playing.
  • The monitoring terminal 130 is configured to receive the video data sent by the monitoring platform, and if the received video data includes the non-masked video data and the masked video data, further configured to merge and play the masked video data and the non-masked video data.
  • FIG. 2 is a schematic flowchart of a method for implementing video mask according to a first embodiment of the present invention.
  • Step 210: Receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • The masked video data and the non-masked video data may specifically be encoded in the H.264 format.
  • The device identifier is used to uniquely identify the peripheral unit, and specifically, it may include an identifier of a camera of the peripheral unit, and may further include an identifier of a cloud mirror of the peripheral unit.
  • Step 220: Determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • The masked area may specifically include one or more areas, where each area may be rectangular, circular, polygonal, or of another shape. If one area is included, the masked video data corresponding to the masked area may specifically include one channel of video data. If multiple areas are included, the masked video data corresponding to the masked area may specifically include one channel of video data, or may include multiple channels of video data, for example, each area included in the masked area corresponds to one channel of video data.
  • Specifically, description information of the masked area may be used to describe the masked area. The description information of the masked area specifically includes coordinates of the masked area. For example, when the masked area includes a rectangle, the description information of the masked area may include the coordinates of at least three vertices of the rectangle, or may include only the coordinates of one vertex of the rectangle together with the width and height of the rectangle, for example, (x, y, w, h), where x is the horizontal coordinate of the upper left vertex, y is the vertical coordinate of the upper left vertex, w is the width, and h is the height.
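As a sketch of the equivalence between the two descriptions above (three rectangle vertices versus one vertex plus width and height), the (x, y, w, h) form can be derived from any three vertices of an axis-aligned rectangle; the function name is a hypothetical assumption:

```python
def rect_from_vertices(v1, v2, v3):
    """Derive the (x, y, w, h) description of an axis-aligned rectangle
    from the coordinates of three of its vertices."""
    xs = [v1[0], v2[0], v3[0]]
    ys = [v1[1], v2[1], v3[1]]
    x, y = min(xs), min(ys)          # upper left vertex
    return (x, y, max(xs) - x, max(ys) - y)

print(rect_from_vertices((10, 20), (110, 20), (110, 80)))  # → (10, 20, 100, 60)
```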
  • Specifically, overall permission control may be performed for the masked video data, that is, permission to access the masked video data is classified into two permission levels: having access permission and having no access permission. In this case, it can be directly determined whether a user has permission to access the masked video data. In this implementation manner, the first masked video data is the masked video data, and the first masked area is the whole masked area. Alternatively, area-based permission control may be performed for the masked video data, with respective permission set for different areas, that is, video data that corresponds to different areas may correspond to different permission. For example, the masked area includes three areas: area 1 and area 2 correspond to permission A, and area 3 corresponds to permission B. For another example, the masked area includes three areas: area 1 corresponds to permission A, area 2 corresponds to permission B, and area 3 corresponds to permission C. In this case, it is necessary to determine whether the user has permission to access the masked video data that corresponds to a specific area.
  • Specifically, the permission may be determined according to a password. For example, if a password that is received from the first monitoring terminal and used to acquire the first masked video data is determined to be correct (that is, a user inputs a correct password), it is determined that the user has the permission to acquire the first masked video data.
  • Specifically, the permission may alternatively be determined according to a user identifier of the user of the first monitoring terminal. For example, an authorized user identifier may be preconfigured, and if the user identifier matches the authorized user identifier, it is determined that the user has the permission to acquire the first masked video data; an authorized account type may also be preconfigured, and if the account type corresponding to the user identifier matches the authorized account type, it is determined that the user has the permission to acquire the first masked video data. Optionally, the user identifier may be acquired when the user logs in by using the monitoring terminal. In addition, the video request received in step 210 may carry the user identifier, in which case the user identifier carried in the video request may be acquired.
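Both determination paths (the password check and the user identifier / account type match) can be sketched as follows; the hash-based password store and every identifier below are illustrative assumptions, not a claimed mechanism:

```python
import hashlib
import hmac

# Hypothetical per-area permission store: a password hash plus the
# authorized user identifiers and account types (all values assumed).
AREA_PERMISSION = {
    "area-1": {
        "pwd_sha256": hashlib.sha256(b"secret").hexdigest(),
        "users": {"admin-007"},
        "account_types": {"supervisor"},
    },
}

def has_permission(area_id, password=None, user_id=None, account_type=None):
    """Determine permission by password if one is supplied, otherwise by
    a preconfigured user identifier or account type match."""
    p = AREA_PERMISSION[area_id]
    if password is not None:
        digest = hashlib.sha256(password.encode()).hexdigest()
        return hmac.compare_digest(digest, p["pwd_sha256"])
    return user_id in p["users"] or account_type in p["account_types"]

print(has_permission("area-1", password="secret"))  # → True
```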
  • If a determined result is yes, perform step 230A. If the determined result is no, perform step 230B.
  • Step 230A: Acquire the first masked video data and the non-masked video data; and send the first masked video data and the non-masked video data to the first monitoring terminal, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data and send the merged video data to the first monitoring terminal.
  • Preferably, a data type of the masked video data may also be sent to the first monitoring terminal, so that the first monitoring terminal identifies the masked video data from the received video data. Specifically, the data type may specifically be included in an acquiring address (for example, a URL) that is sent to the first monitoring terminal and used to acquire the masked video data, or the data type may be included in a message that is sent to the first monitoring terminal and used to carry the acquiring address, or the data type may be sent in a process of establishing a media channel between the first monitoring terminal and a monitoring platform, where the media channel is used to transmit the masked video data.
  • Preferably, before the sending the first masked video data, the method may further include: sending description information of the first masked area to the first monitoring terminal, so that the first monitoring terminal merges and plays, according to the description information of the first masked area, the first masked video data and the non-masked video data that are received in step 230A. Specifically, the description information may be included in the acquiring address (for example, a URL) that is sent to the first monitoring terminal and used to acquire the masked video data, or the description information may be included in the message that is sent to the first monitoring terminal and used to carry the acquiring address, or the description information may be sent in the process of establishing the media channel used to transmit the masked video data.
  • Step 230B: Acquire the non-masked video data and send it to the first monitoring terminal. Optionally, specific implementations of step 230A and step 230B are as follows:
    • In step 230A, the acquiring the first masked video data and the non-masked video data; and sending the first masked video data and the non-masked video data to the first monitoring terminal may specifically include: generating an acquiring address (for example, a URL, acquiring address 1 for short below) of the first masked video data and an acquiring address (for example, a URL, acquiring address 2 for short below) of the non-masked video data and sending the acquiring addresses to the first monitoring terminal, receiving a request that is sent by the first monitoring terminal and includes the acquiring address 2, establishing, with the first monitoring terminal according to the acquiring address 2, a media channel used to send the non-masked video data, acquiring the non-masked video data according to the acquiring address 2, and sending the non-masked video data through the media channel; meanwhile, receiving a request that is sent by the first monitoring terminal and includes the acquiring address 1, establishing, with the first monitoring terminal according to the acquiring address 1, a media channel used to send the first masked video data, acquiring the first masked video data according to the acquiring address of the first masked video data, and sending the first masked video data through the media channel. Specifically, if the first masked video data includes multiple channels of video data, the acquiring address of the first masked video data includes acquiring addresses of the multiple channels of video data, and media channels established subsequently include multiple media channels used to transmit the multiple channels of video data.
  • Preferably, the acquiring address (for example, a URL) sent to the first monitoring terminal carries a data type. The data type is used to indicate whether the video data that can be acquired according to the acquiring address is the non-masked video data or the masked video data. Examples of a format of a URL (Uniform Resource Locator) that carries the data type are as follows:
    • an example of a URL of the non-masked video data: rtsp://192.7.90.55:554/ipc00001?type=non-masked; and
    • an example of a URL of the masked video data: rtsp://192.7.90.55:554/ipc00001?type=masked;
    • where rtsp refers to the Real-Time Streaming Protocol.
  • Preferably, description information (for example, a coordinate of a masked area) of the masked area corresponding to the masked video data may be further carried in the acquiring address of the masked video data. Examples of a format of a URL that carries the data type and the description information of the masked area are as follows:
    • example 1 of the URL of the masked video data: rtsp://192.7.90.55:554/ipc00001?type=masked&masked coordinate 1 (x1, y1, w1, h1)&masked coordinate 2 (x2, y2, w2, h2), where in this example, the masked video data corresponds to two masked areas; and
    • example 2 of the URL of the masked video data: rtsp://192.7.90.55:554/ipc00001?type=masked&masked coordinate 1 (x1, y1, w1, h1), where in this example, the masked video data corresponds to one masked area.
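The type parameter convention shown in these URL examples can be read with standard URL parsing. The sketch below is illustrative only; it assumes the parameter is encoded as an ordinary query string and treats a URL without a type parameter as non-masked.

```python
from urllib.parse import urlparse, parse_qs

def stream_type(url):
    """Return the data type ('masked' or 'non-masked') carried in the URL,
    defaulting to 'non-masked' when no type parameter is present."""
    query = parse_qs(urlparse(url).query)
    return query.get("type", ["non-masked"])[0]

print(stream_type("rtsp://192.7.90.55:554/ipc00001?type=masked"))      # masked
print(stream_type("rtsp://192.7.90.55:554/ipc00001?type=non-masked"))  # non-masked
```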
  • Optionally, the monitoring platform may further send the data type and/or the description information of the masked area to the first monitoring terminal by message exchange. For example, when a URL is returned to the first monitoring terminal, the data type and/or the description information of the masked area is included in a message body of an XML structure in a message that carries the URL, as shown in the following:
           <url>rtsp://192.7.90.55:554/ipc00001?type=masked</url>
           <coordinate>
           <value>(x1,y1,w1,h1)</value>
           <value>(x2,y2,w2,h2)</value>
           </coordinate>
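A message body of this shape can be read with any XML parser. In the sketch below, a `<response>` root element is assumed purely so that the fragment is well-formed; the patent text shows only the body elements.

```python
import xml.etree.ElementTree as ET

# The <url> and <coordinate> elements are wrapped in an assumed <response>
# root element; the message fragment above has no explicit root.
body = """<response>
  <url>rtsp://192.7.90.55:554/ipc00001?type=masked</url>
  <coordinate>
    <value>(x1,y1,w1,h1)</value>
    <value>(x2,y2,w2,h2)</value>
  </coordinate>
</response>"""

root = ET.fromstring(body)
url = root.findtext("url")
coordinates = [v.text for v in root.find("coordinate")]
print(url)          # rtsp://192.7.90.55:554/ipc00001?type=masked
print(coordinates)  # ['(x1,y1,w1,h1)', '(x2,y2,w2,h2)']
```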
  • In addition, a user-defined structure body in an RTSP ANNOUNCE message may also be used to carry the data type and/or the description information of the masked area in the process of establishing the media channel between the first monitoring terminal and the monitoring platform. An example is shown as follows:
  •            S->C: ANNOUNCE rtsp://192.7.90.55:554/ipc00001 RTSP/1.0
               CSeq: 312
               Date: 23 Jan 1997 15:35:06 GMT
               Session: 47112344
                urltype: KeepOutUrl // indicating that the media stream belongs to the masked video data
               urlcoordinate:x1=100,y1=100,w1=200,h1=200;x2=100,y2=100,w2=200,h2=200.
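The urlcoordinate value in the ANNOUNCE example can be decoded into rectangle descriptions. A minimal sketch, assuming the semicolon-and-comma format shown above:

```python
def parse_urlcoordinate(value):
    """Parse an urlcoordinate value such as
    'x1=100,y1=100,w1=200,h1=200;x2=100,...' into a list of rectangles."""
    rectangles = []
    for group in value.rstrip(".").split(";"):
        pairs = dict(item.split("=") for item in group.split(","))
        # drop the per-rectangle index suffix from each key (x1 -> x, y2 -> y, ...)
        rectangles.append({k.rstrip("0123456789"): int(v) for k, v in pairs.items()})
    return rectangles

rects = parse_urlcoordinate("x1=100,y1=100,w1=200,h1=200;x2=100,y2=100,w2=200,h2=200")
print(rects[0])  # {'x': 100, 'y': 100, 'w': 200, 'h': 200}
```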
  • In step 230A, the acquiring the first masked video data and the non-masked video data, merging the first masked video data and the non-masked video data, and sending the merged video data to the first monitoring terminal specifically includes: generating an acquiring address (for example, a URL) used to acquire the merged video data and sending it to the first monitoring terminal, receiving a request that is sent by the first monitoring terminal and includes the acquiring address, establishing, with the first monitoring terminal according to the acquiring address, a media channel used to send the merged video data, acquiring and merging the first masked video data and the non-masked video data, and sending the merged video data to the first monitoring terminal through the media channel.
  • Step 230B may include: generating an acquiring address of the non-masked video data and sending it to the first monitoring terminal, receiving a request that is sent by the first monitoring terminal and includes the acquiring address, establishing, with the first monitoring terminal according to the acquiring address, a media channel used to send the non-masked video data, acquiring the non-masked video data according to the acquiring address of the non-masked video data and sending the non-masked video data through the media channel.
  • The following describes an optional implementation manner of the first embodiment of the present invention with reference to FIG. 3.
  • A CU (Client Unit, client unit) in this implementation manner is client software installed on a monitoring terminal and provides monitoring personnel with functions such as real-time video surveillance, video query and playback, and a cloud mirror operation.
  • A monitoring platform includes an SCU (Service Control Unit, service control unit) and an MU (Media Unit, media unit). In a practical application, the SCU and the MU may be implemented in a same general-purpose server or dedicated server, or may be separately implemented in different general-purpose servers or dedicated servers.
  • Step 301: A CU sends a video request to an SCU of a monitoring platform, where the video request includes a device identifier and is used to request video data of a peripheral unit identified by the device identifier, and the video data includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • Step 302: The SCU determines whether a user of the CU has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • Specific implementation of step 302 is the same as that of step 220, and therefore no further details are provided herein.
  • If a determined result is yes, steps 303A-312A are performed. In this implementation manner, it is assumed that the first masked video data includes one channel of video data.
  • If the determined result is no, steps 303B-308B are performed.
  • Steps 303A-306A: The SCU requests a URL of the first masked video data and a URL of the non-masked video data from an MU, and the MU generates the URL of the first masked video data and the URL of the non-masked video data and returns them to the SCU.
  • Step 307A: The SCU returns the URL of the first masked video data and the URL of the non-masked video data to the CU.
  • Steps 308A-309A: The CU requests the first masked video data from the MU according to the URL of the first masked video data, establishes, with the MU, a media channel used to transmit the first masked video data, and receives, through the media channel, the first masked video data sent by the MU.
  • Steps 310A-311A: The CU requests the non-masked video data from the MU according to the URL of the non-masked video data, establishes, with the MU, a media channel used to transmit the non-masked video data, and receives, through the media channel, the non-masked video data sent by the MU.
  • Step 312A: The CU merges and plays the first masked video data and the non-masked video data.
  • Steps 303B-304B: The SCU requests a URL of the non-masked video data from the MU, and the MU generates the URL of the non-masked video data and returns it to the SCU.
  • Step 305B: The SCU returns the URL of the non-masked video data to the CU.
  • Steps 306B-307B: The CU requests the non-masked video data from the MU according to the URL of the non-masked video data, establishes, with the MU, a media channel used to transmit the non-masked video data, and receives, through the media channel, the non-masked video data sent by the MU.
  • Step 308B: The CU plays the non-masked video data.
  • According to the first embodiment of the present invention, after receiving a video request of a monitoring terminal, a monitoring platform determines permission of a user of the monitoring terminal. According to a determined result, the monitoring platform sends only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends both the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data; alternatively, the monitoring platform sends video data merged from the masked video data and the non-masked video data. This solves a security risk problem in the prior art that results from sending image data of a masked part to terminals of users with different permission. In addition, according to the first embodiment of the present invention, area-based permission control may be implemented; that is, if the masked area includes multiple areas, permission may be set for each area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • It should be noted that the first embodiment of the present invention not only can be used in a real-time video surveillance scenario, but also can be used in a video view scenario (for example, video playback and video downloading). If the first embodiment is used in the video view scenario, the acquiring non-masked video data in steps 230A and 230B is specifically reading the non-masked video data from a non-masked video file, and the acquiring masked video data in step 230A is specifically reading masked video data from a masked video file.
  • Correspondingly, before step 210, the following operations are performed:
    • Store the masked video data into the masked video file, store the non-masked video data into the non-masked video file, and establish an association between the masked video file and the non-masked video file, where the masked video file includes one or more video files.
  • The establishing an association between the masked video file and the non-masked video file specifically includes: recording a non-masked video index and a masked video index, and establishing an association between the non-masked video index and the masked video index, where the non-masked video index includes a device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file (for example, a storage address of the non-masked video file, which may specifically be an absolute path of the non-masked video file), and the indication information of the non-masked video data is used to indicate that the non-masked video index is an index of the non-masked video file; and the masked video index includes indication information of the masked video data and an identifier of the masked video file (for example, a storage address of the masked video file, which may specifically be an absolute path of the masked video file), and the indication information of the masked video data is used to indicate that the masked video index is an index of the masked video file. Preferably, both the non-masked video index and the masked video index may include indication information of a non-independent index, where the indication information of the non-independent index is used to indicate an index associated with the index. For example, the indication information of the non-independent index of the non-masked video index is used to indicate a masked video index associated with the non-masked video index. The non-masked video index and/or the masked video index may further include description information of a masked area, or information (for example, a storage address of the description information of the masked area) used to acquire the description information of the masked area. 
The establishing an association between the non-masked video index and the masked video index may specifically include recording an identifier (for example, an index number) of the masked video index into the non-masked video index, or may further include recording an identifier (for example, an index number) of the non-masked video index into the masked video index, or may further include recording an association between the identifier of the masked video index and the identifier of the non-masked video index. It should be noted that if the masked video data includes multiple channels of video data, a masked video index may be established for each channel of video data, and an association is established between the non-masked video index and each masked video index.
  • Preferably, description information of a masked area corresponding to the video file, or information used to acquire the description information of the masked area corresponding to the video file is recorded in each masked video index.
  • Examples of the non-masked video index and the masked video index are shown in Table 1.
    [Table 1, rendered as an image in the original document, gives examples of the non-masked video index and the masked video index.]
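The index records and their association described above might be modeled as follows. All field names here are illustrative assumptions; the text only requires that each index carry its indication information, its file identifier, and the recorded association.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NonMaskedVideoIndex:
    device_id: str          # device identifier of the peripheral unit
    start_time: str         # video start time
    end_time: str           # video end time
    file_id: str            # e.g. absolute path of the non-masked video file
    masked: bool = False    # indication information: indexes a non-masked file
    masked_index_ids: List[int] = field(default_factory=list)  # association

@dataclass
class MaskedVideoIndex:
    index_id: int
    file_id: str            # e.g. absolute path of the masked video file
    masked: bool = True     # indication information: indexes a masked file
    area_description: Optional[str] = None  # e.g. a coordinate "(x, y, w, h)"

non_masked = NonMaskedVideoIndex("ipc00001", "09:00", "10:00", "/video/ipc00001_nm.mp4")
masked = MaskedVideoIndex(1, "/video/ipc00001_m1.mp4",
                          area_description="(100, 100, 200, 200)")
# Record the identifier of the masked video index into the non-masked video index.
non_masked.masked_index_ids.append(masked.index_id)
```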
  • Correspondingly, if the first embodiment is used in the video view scenario, the video request sent in step 210 may further include view time. In steps 230A and 230B, the acquiring the non-masked video data is specifically acquiring video data corresponding to the view time from the non-masked video file, and may specifically include: acquiring the non-masked video index according to the identifier of the peripheral unit, the view time, and the indication information of the non-masked video data, acquiring the non-masked video file according to the identifier of the non-masked video file in the non-masked video index, and acquiring the non-masked video data corresponding to the view time from the non-masked video file. In step 230A, the acquiring the masked video data is specifically acquiring, according to the association between the masked video file and the non-masked video file, one or more video files that are associated with the non-masked video file and correspond to the first masked area and acquiring the video data corresponding to the view time from the one or more video files corresponding to the first masked area, and specifically includes: acquiring, according to the association between the non-masked video index and the masked video index (for example, according to the identifier of the masked video index in the non-masked video index), the masked video index associated with the non-masked video index, acquiring, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area, and acquiring the video data corresponding to the view time from the one or more video files corresponding to the first masked area. 
Preferably, before the acquiring, according to the association, the masked video index associated with the non-masked video index, the masked video index associated with the non-masked video index may be further determined according to the indication information of the non-independent index in the non-masked video index, so as to improve efficiency of the monitoring platform in retrieving the masked video index.
  • Specifically, after the non-masked video index is acquired, an acquiring address used to acquire the non-masked video data may be generated according to the non-masked video index and sent to a first monitoring terminal, a request that is sent by the first monitoring terminal and includes the acquiring address of the non-masked video data is received, a media channel used to send the non-masked video data is established with the first monitoring terminal according to the acquiring address of the non-masked video data, the non-masked video data is acquired according to the acquiring address of the non-masked video data, and the non-masked video data is sent through the media channel. For example, as shown in FIG. 3, the SCU of the monitoring platform acquires the non-masked video index after receiving the video request, requests, from the MU according to the non-masked video index, a URL used to acquire the non-masked video data corresponding to the non-masked video index, and sends the URL to the CU. The MU receives the request that is sent by the CU and includes the URL, establishes, with the CU according to the URL, a media channel used to send the non-masked video data, reads the non-masked video data in the video file according to the URL, and sends the non-masked video data to the CU through the media channel. A process of sending the masked video data after the masked video index is acquired is similar to a process of sending the non-masked video data after the non-masked video index is acquired, and therefore no further details are provided herein.
  • Preferably, before the sending the first masked video data, the method may further include: sending description information of the first masked area to the first monitoring terminal, so that the first monitoring terminal merges and plays, according to the description information of the first masked area, the first masked video data and the non-masked video data that are received in step 230A. The method may specifically include: acquiring the non-masked video index or description information of a masked area that is included in a masked video index corresponding to the first masked video data, or acquiring the description information of the first masked area according to the non-masked video index or information that is included in the masked video index and used to acquire the description information of the first masked area, and sending the acquired description information of the first masked area to the first monitoring terminal. Specifically, the description information of the first masked area may be carried in a message that is sent to the first monitoring terminal and carries an acquiring address of the first masked video data.
  • Preferably, before step 230A, the method further includes receiving a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes a device identifier of the peripheral unit and the description information of the masked area. After the masked area setting request is received, the description information of the masked area may be sent to the peripheral unit, and the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area are received; or the masked video data and the non-masked video data may be obtained by separating, according to the description information of the masked area, complete video data received from the peripheral unit. In addition, as described in step 230A, the masked video data and the non-masked video data may be sent to the first monitoring terminal and be merged and played by the first monitoring terminal, or the masked video data and the non-masked video data may be merged and then sent to the first monitoring terminal.
  • It should be noted that the first monitoring terminal and the second monitoring terminal may be a same monitoring terminal.
  • In conclusion, an entity generating the non-masked video data and the masked video data may be a peripheral unit or a monitoring platform, and an entity merging the non-masked video data and the masked video data may be a monitoring platform or a monitoring terminal (that is, the first monitoring terminal in the first embodiment of the present invention). According to different entities performing a generation operation and a merging operation, the following separately describes three exemplary implementation manners of the first embodiment of the present invention.
  • A first exemplary implementation manner is as follows: As shown in FIG. 4, the peripheral unit generates the non-masked video data and the masked video data, the monitoring platform separately sends the monitoring terminal (for example, the first monitoring terminal in this embodiment) the non-masked video data and the masked video data (for example, the first masked video data in this embodiment) that a user has permission to acquire, and the monitoring terminal merges and plays the received video data.
  • The following introduces an exchange flowchart of the first exemplary implementation manner according to the first embodiment of the present invention with reference to FIG. 5.
  • Step 401: A second monitoring terminal sends a masked area setting request to a monitoring platform, where the masked area setting request includes a device identifier and description information of a masked area.
  • The masked area may specifically include one or more areas, where each area may be rectangular, circular, polygonal, or the like. Preferably, the description information of the masked area specifically includes a coordinate of the masked area. For example, when the masked area includes a rectangle, the description information of the masked area may include coordinates of at least three vertices of the rectangle, or may include only a coordinate of one vertex of the rectangle and a width and a height of the rectangle, for example, (x, y, w, h), where x is the horizontal coordinate of the upper left vertex, y is the vertical coordinate of the upper left vertex, w is the width, and h is the height.
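The (x, y, w, h) rectangle description can be captured in a small structure. The containment check below is an illustrative addition, not part of the described method:

```python
from dataclasses import dataclass

@dataclass
class RectangularMaskedArea:
    x: int  # horizontal coordinate of the upper left vertex
    y: int  # vertical coordinate of the upper left vertex
    w: int  # width
    h: int  # height

    def contains(self, px, py):
        """Return True if pixel (px, py) falls inside this masked area."""
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

area = RectangularMaskedArea(100, 100, 200, 200)
print(area.contains(150, 180))  # True
print(area.contains(50, 50))    # False
```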
  • Step 402: The monitoring platform sends the masked area setting request to a peripheral unit identified by the device identifier, where the masked area setting request includes the description information of the masked area.
  • Step 403: The peripheral unit encodes a captured video picture to generate masked video data and non-masked video data.
  • Specifically, the peripheral unit encodes the captured video picture into the non-masked video data corresponding to a non-masked area and the masked video data corresponding to the masked area. If the masked area includes one area, a video picture corresponding to the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data.
  • If the masked area includes multiple areas, video pictures corresponding to the multiple areas included in the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data; or video pictures corresponding to the multiple areas included in the masked area may be encoded into one channel of video data each, that is, the masked video data includes multiple channels of video data and each area corresponds to one channel of video data; or video pictures corresponding to areas with same permission among the multiple areas included in the masked area may be encoded into one channel of video data, that is, the areas corresponding to the same permission correspond to a same channel of video data, for example, if the masked area includes three areas, area 1 and area 2 correspond to same permission, and area 3 corresponds to another permission, video pictures corresponding to area 1 and area 2 are encoded into a same channel of video data, and a video picture corresponding to area 3 is encoded into another channel of video data.
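The permission-based grouping of areas into channels can be sketched as follows, using the three-area example above with hypothetical permission labels:

```python
from collections import defaultdict

# Hypothetical assignment matching the example above: area 1 and area 2
# share one permission, area 3 has another.
area_permissions = {"area1": "permission_A", "area2": "permission_A", "area3": "permission_B"}

# Areas with the same permission are encoded into the same channel of video data.
channels = defaultdict(list)
for area, permission in area_permissions.items():
    channels[permission].append(area)

print(dict(channels))  # {'permission_A': ['area1', 'area2'], 'permission_B': ['area3']}
```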
  • Specifically, the video picture corresponding to the masked area may be directly encoded into the masked video data, that is, a video data frame of the masked video data includes only pixel data of the video picture corresponding to the masked area; alternatively, the whole captured video picture may be encoded after the part of the picture corresponding to the non-masked area is filled by using a set pixel value, so as to generate the masked video data, that is, a video data frame of the masked video data includes both pixel data of the video picture corresponding to the masked area and filled pixel data.
  • Encoding formats include but are not limited to H.264, MPEG-4, and MJPEG.
  • Specifically, the video picture corresponding to the non-masked area may be directly encoded into the non-masked video data; alternatively, the whole captured video picture may be encoded after the part of the picture corresponding to the masked area is filled by using a set pixel value, so as to generate the non-masked video data, where the set pixel value is preferably RGB (0, 0, 0).
  • During the encoding, timestamps of video data frames corresponding to a same complete video picture are kept completely consistent in the masked video data and the non-masked video data.
  • It should be noted that in the first exemplary implementation manner, the description information of the masked area is sent by the monitoring platform to the peripheral unit. Optionally, the description information of the masked area may be preset on the peripheral unit.
  • Step 404: Send the generated masked video data and non-masked video data to the monitoring platform.
  • Preferably, the peripheral unit may further send a data type of the masked video data to the monitoring platform, so that the monitoring platform identifies the masked video data from received video data. The data type may be included in an acquiring address (for example, a URL) that is sent to the monitoring platform and used to acquire the masked video data (where the monitoring platform may acquire the masked video data from the peripheral unit by using the acquiring address), or the data type may be included in a message that is sent to the monitoring platform and used to carry the acquiring address, or the data type may be sent in a process of establishing a media channel between the monitoring platform and the peripheral unit, where the media channel is used to transmit the masked video data.
  • Step 405: A first monitoring terminal sends a video request to the monitoring platform, where the video request includes the device identifier of the peripheral unit.
  • Step 406: Determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • If a determined result is yes, steps 407A-409A are performed.
  • If the determined result is no, steps 407B-408B are performed.
  • Step 407A: The monitoring platform sends description information of the first masked area to the first monitoring terminal.
  • Step 408A: The monitoring platform sends the first masked video data and the non-masked video data to the first monitoring terminal.
  • Step 409A: The first monitoring terminal merges and plays the received first masked video data and non-masked video data.
  • Preferably, the received first masked video data and non-masked video data are merged and played according to the description information of the first masked area.
  • Specifically, if the first masked video data includes one channel of video data, the first masked video data is decoded to obtain a masked video data frame, and the non-masked video data is decoded to obtain a non-masked video data frame. Pixel data in the masked video data frame is extracted, and the extracted pixel data is added, according to the description information of the first masked area, to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and the complete video data frame is played. If a video picture corresponding to the first masked area is directly encoded into the first masked video data during encoding, that is, the masked video data frame includes only pixel data of a video picture corresponding to the masked area, the extracting the pixel data in the masked video data frame is specifically extracting all pixel data in the masked video data frame. If the whole captured video picture is encoded, during the encoding, after the part corresponding to the non-masked area is filled by using a set pixel value so as to generate the masked video data, that is, a video data frame of the masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data, pixel data of the pixel area in the masked video data frame is extracted according to the description information of the first masked area, where the pixel area corresponds to the first masked area.
  • If the first masked video data includes multiple channels of video data, each channel of video data in the first masked video data is decoded to obtain a masked video data frame of the channel of video data, the non-masked video data is decoded to obtain a non-masked video data frame, pixel data in masked video data frames of all channels of video data is extracted, where the masked video data frames have a same timestamp, the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and the complete video data frame is played.
  • Specifically, as shown in FIG. 6, both the non-masked video data and the first masked video data are transmitted to the first monitoring terminal through the RTP protocol. The first monitoring terminal receives a non-masked video data code stream and a first masked video data code stream that are encapsulated through the RTP protocol, parses the non-masked video data code stream and the first masked video data code stream to obtain the non-masked video data and the first masked video data respectively, and separately caches the non-masked video data and the first masked video data in a decoder buffer area. Frame data is synchronized according to a synchronization timestamp, that is, frame data that has a same timestamp is separately extracted from the non-masked video data and the first masked video data. The extracted frame data of the non-masked video data and the extracted frame data of the first masked video data that have the same timestamp are separately decoded to generate corresponding YUV data. Then, YUV data of the first masked video data and YUV data of the non-masked video data are merged according to the description information of the first masked area, and the merged YUV data is rendered and played.
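The per-frame merge can be illustrated on decoded pixel arrays. The sketch below uses NumPy and plain RGB-style arrays rather than YUV planes for brevity; it assumes the masked region was encoded as only the masked-area pixels and that timestamp matching has already paired the two frames.

```python
import numpy as np

def merge_frames(non_masked_frame, masked_region, area):
    """Paste a decoded masked region into the non-masked frame at the
    position (x, y, w, h) given by the masked-area description. Both
    inputs are assumed to be decoded pixel arrays of the same timestamp."""
    x, y, w, h = area
    merged = non_masked_frame.copy()
    merged[y:y + h, x:x + w] = masked_region
    return merged

# Toy example: an 8x8 black frame and a 2x2 white masked region at (3, 2).
frame = np.zeros((8, 8, 3), dtype=np.uint8)
region = np.full((2, 2, 3), 255, dtype=np.uint8)
complete = merge_frames(frame, region, (3, 2, 2, 2))
print(complete[2, 3])  # [255 255 255]  (inside the masked area)
print(complete[0, 0])  # [0 0 0]        (outside the masked area)
```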
  • It should be noted that if the masked video data and the non-masked video data that are sent by the peripheral unit are not received before step 404, a request for acquiring video data is sent to the peripheral unit after step 404. Specifically, video data that a user of the first monitoring terminal has permission to acquire may be requested from the peripheral unit according to the determined result in step 406. For example, if the user only has permission to acquire the non-masked video data, only the non-masked video data is requested; and if the user has permission to acquire the non-masked video data and the first masked video data, both the non-masked video data and the first masked video data are requested. After receiving the request, the peripheral unit generates the requested video data and returns it to the monitoring platform. A method used by the peripheral unit to generate the non-masked video data and the first masked video data is the same as that in step 403, and therefore no further details are provided.
  • Step 407B: The monitoring platform forwards the non-masked video data to the first monitoring terminal.
  • Step 408B: The first monitoring terminal plays the received non-masked video data.
  • A second exemplary implementation manner is as follows: As shown in FIG. 7, the peripheral unit generates the non-masked video data and the masked video data, and the monitoring platform merges the non-masked video data and the masked video data (that is, the first masked video data) that a user has permission to acquire and then sends the merged video data to the monitoring terminal.
  • The following introduces an exchange flowchart of the second exemplary implementation manner according to the first embodiment of the present invention with reference to FIG. 8.
  • Steps 501-506 are the same as steps 401-406, and therefore no further details are provided.
  • If a determined result in step 506 is yes, steps 507A-510A are performed.
  • If the determined result in step 506 is no, steps 507B-508B are performed.
  • Step 507A is the same as step 407A.
  • Step 508A: The monitoring platform merges the non-masked video data and the first masked video data.
  • Preferably, the first masked video data and the non-masked video data are merged according to the description information of the masked area received in step 501.
  • Specifically, if the first masked video data includes one channel of video data, the first masked video data is decoded to obtain a masked video data frame, the non-masked video data is decoded to obtain a non-masked video data frame, pixel data in the masked video data frame is extracted, the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the masked area, and the complete video data frame is encoded to obtain the merged video data. If a video picture corresponding to the masked area is directly encoded into the first masked video data during encoding, that is, the masked video data frame includes only pixel data of the video picture corresponding to the masked area, the extracting the pixel data in the masked video data frame is specifically extracting all pixel data in the masked video data frame. If the first masked video data is generated during the encoding by filling the video picture corresponding to the non-masked area in the whole captured video picture with a set pixel value and then encoding the whole picture, that is, a video data frame of the first masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data, pixel data of a pixel area in the masked video data frame is extracted, where the pixel area corresponds to the first masked area.
  • If the first masked video data includes multiple channels of video data, each channel of video data in the first masked video data is decoded to obtain a masked video data frame of the channel of video data, the non-masked video data is decoded to obtain a non-masked video data frame, pixel data in masked video data frames of all channels of video data is extracted, where the masked video data frames have a same timestamp, the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the masked area, and the complete video data frame is encoded to obtain the merged video data.
  • Specifically, as shown in FIG. 9, both the non-masked video data and the first masked video data are transmitted to the monitoring platform through the RTP protocol. Processing after the monitoring platform receives a non-masked video data code stream and a first masked video data code stream that are encapsulated through the RTP protocol is similar to the processing after the first monitoring terminal receives a code stream in step 409A. A difference lies only in that the first monitoring terminal renders and plays YUV data after merging the YUV data, while the monitoring platform encodes merged YUV data after merging the YUV data, so as to generate the merged video data.
  • Step 509A: Send the merged video data to the first monitoring terminal.
  • Step 510A: The first monitoring terminal directly decodes and plays the merged video data.
  • Steps 507B-508B are the same as steps 407B-408B.
  • A third exemplary implementation manner is as follows: As shown in FIG. 10, the peripheral unit generates complete video data, the monitoring platform obtains the masked video data and the non-masked video data by separating the complete video data received from the peripheral unit and separately sends the monitoring terminal the non-masked video data and the masked video data that a user has permission to acquire, and the monitoring terminal merges and plays the received masked video data and non-masked video data.
  • The following introduces an exchange flowchart of the third exemplary implementation manner according to the first embodiment of the present invention with reference to FIG. 11.
  • Step 601 is the same as step 401, and therefore no further details are provided.
  • Step 602: The peripheral unit encodes a captured video picture into complete video data and sends the complete video data to the monitoring platform.
  • Step 603: The monitoring platform obtains the masked video data corresponding to a masked area and the non-masked video data corresponding to a non-masked area by separating, according to the description information of a masked area received in step 601, the complete video data.
  • If the masked area includes one area, a video picture in the complete video data may be encoded into one channel of video data, that is, the masked video data includes one channel of video data, where the video picture corresponds to the masked area.
  • If the masked area includes multiple areas, video pictures in the complete video data that correspond to the multiple areas included in the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data; or video pictures in the complete video data that correspond to the multiple areas included in the masked area may be encoded into one channel of video data each, that is, the masked video data includes multiple channels of video data and each area corresponds to one channel of video data; or video pictures corresponding to areas with same permission among the multiple areas included in the masked area may be encoded into one channel of video data, that is, the areas corresponding to the same permission correspond to a same channel of video data, for example, if the masked area includes three areas, area 1 and area 2 correspond to same permission, and area 3 corresponds to another permission, video pictures corresponding to area 1 and area 2 are encoded into a same channel of video data, and a video picture corresponding to area 3 is encoded into another channel of video data.
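The last of the three channel layouts above, in which areas sharing the same permission are encoded into one channel, reduces to a simple grouping step. A hedged Python sketch, with invented permission labels and area names:

```python
# Illustrative grouping: areas that correspond to the same permission are
# assigned to one channel of masked video data, so area1 and area2 (same
# permission) share a channel while area3 gets its own. The labels
# "perm-A"/"perm-B" and "areaN" are stand-ins, not part of the patent.

def channels_by_permission(area_permissions):
    """Map each permission level to the list of areas encoded together."""
    channels = {}
    for area, permission in area_permissions.items():
        channels.setdefault(permission, []).append(area)
    return channels

areas = {"area1": "perm-A", "area2": "perm-A", "area3": "perm-B"}
ch = channels_by_permission(areas)  # one entry per channel of masked video
```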
  • Specifically, the video picture corresponding to the masked area may be directly encoded into the masked video data. This includes: decoding the complete video data to obtain a complete video data frame and extracting pixel data of the video picture in the complete video data frame to generate a video data frame of the masked video data, where the video picture corresponds to the masked area. Alternatively, the masked video data may be generated by filling the video picture corresponding to the non-masked area in the whole captured video picture with a set pixel value and then encoding the whole picture. This includes: decoding the complete video data to obtain a complete video data frame and setting a pixel value of a pixel of a pixel area in the complete video data frame to a set pixel value, where the pixel area corresponds to the non-masked area, that is, the video data frame of the masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data.
  • The obtaining the non-masked video data corresponding to a non-masked area may specifically be directly encoding the video picture corresponding to the non-masked area into the non-masked video data, which includes: decoding the complete video data to obtain a complete video data frame and extracting pixel data of the video picture in the complete video data frame to generate the video data frame of the non-masked video data, where the video picture corresponds to the non-masked area; or may specifically be filling the video picture corresponding to the masked area in the whole video picture with a set pixel value and then encoding the whole picture so as to generate the non-masked video data, which includes: decoding the complete video data to obtain a complete video data frame and setting a pixel value of a pixel of a pixel area in the complete video data frame to the set pixel value, where the pixel area corresponds to the masked area, and the set pixel value is preferably RGB (0, 0, 0).
  • During the encoding, timestamps of video data frames corresponding to a same complete video picture are kept completely consistent in the masked video data and the non-masked video data. Encoding formats include but are not limited to H.264, MPEG4, and MJPEG.
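The separation in step 603 can be sketched for one masked area. This is an illustrative Python sketch under stated assumptions: decoded frames are modeled as lists of RGB-tuple rows, the area description is an (x, y, width, height) rectangle, the crop variant is used for the masked frame, and the fill variant uses the preferred set pixel value RGB (0, 0, 0) for the non-masked frame. The name `separate` is invented for the example.

```python
# Illustrative separation of one decoded complete frame into a masked
# frame (masked-area pixels kept, crop variant) and a non-masked frame
# (masked area blanked with the set pixel value).

SET_PIXEL = (0, 0, 0)  # the preferred set pixel value from the description

def separate(complete_frame, area):
    """Return (masked_frame, non_masked_frame) for one masked area."""
    x, y, w, h = area
    # Crop variant: the masked frame carries only the masked-area pixels.
    masked = [row[x:x + w] for row in complete_frame[y:y + h]]
    # Fill variant: the non-masked frame blanks the masked area.
    non_masked = [row[:] for row in complete_frame]
    for dy in range(h):
        for dx in range(w):
            non_masked[y + dy][x + dx] = SET_PIXEL
    return masked, non_masked

# 4x4 frame whose pixel at (row, col) is (row, col, 0); mask a 2x2 area.
frame = [[(r, c, 0) for c in range(4)] for r in range(4)]
masked, non_masked = separate(frame, (1, 1, 2, 2))
```

Keeping the same timestamp on both output frames, as the paragraph above requires, is what later lets a terminal or the platform reassemble them losslessly.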
  • Steps 604-605 are the same as steps 405-406.
  • If a determined result in step 605 is yes, steps 606A-608A are performed.
  • If the determined result in step 605 is no, steps 606B-607B are performed.
  • Steps 606A-608A are the same as steps 407A-409A.
  • Steps 606B-607B are the same as steps 407B-408B.
  • For brevity, the foregoing method embodiments are represented as a series of actions. However, a person skilled in the art should understand that the present invention is not limited to the order of the described actions, because according to the present invention, some steps may adopt other orders or occur simultaneously. It should be further understood by a person skilled in the art that the described embodiments all belong to exemplary embodiments, and the involved actions and modules are not necessarily required by the present invention.
  • According to the first embodiment of the present invention, a second embodiment of the present invention provides a monitoring platform 500.
  • As shown in FIG. 12, the monitoring platform includes a video request receiving unit 501, a determining unit 502, an acquiring unit 503, and a video data sending unit 504.
  • The video request receiving unit 501 is configured to receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • The determining unit 502 is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • The acquiring unit 503 is configured to acquire the non-masked video data and configured to acquire the first masked video data when a determined result of the determining unit 502 is yes.
  • The video data sending unit 504 is configured to: when the determined result of the determining unit 502 is yes, send the first monitoring terminal the first masked video data and the non-masked video data that are acquired by the acquiring unit 503, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data that are acquired by the acquiring unit 503 to obtain merged video data, and send the merged video data to the first monitoring terminal; and further configured to: when the determined result of the determining unit 502 is no, send the first monitoring terminal the non-masked video data acquired by the acquiring unit 503.
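The branching performed by the determining unit 502 and the video data sending unit 504 reduces to a permission check. A minimal sketch; the user table `PERMISSIONS`, the function name, and the stream labels are stand-ins, not part of the patent:

```python
# Hedged sketch of the permission-based dispatch: a user with permission
# receives both the first masked video data and the non-masked video data
# (so the terminal can merge them); any other user receives only the
# non-masked video data.

PERMISSIONS = {"alice": True, "bob": False}  # illustrative user table

def select_streams(user, non_masked, first_masked):
    """Return the list of streams sent to the user's terminal."""
    if PERMISSIONS.get(user, False):
        # Determined result is yes: send both streams for terminal-side merging.
        return [non_masked, first_masked]
    # Determined result is no (or user unknown): non-masked data only.
    return [non_masked]

streams = select_streams("alice", "non-masked", "first-masked")
```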
  • Optionally, the monitoring platform further includes a setting request receiving unit 505.
  • The setting request receiving unit 505 is configured to receive a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area.
  • As shown in FIG. 13, the monitoring platform further includes a description information sending unit 506 and a first video data receiving unit 507. The description information sending unit 506 is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit 507 is configured to receive the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area.
  • As shown in FIG. 14, the monitoring platform further includes a second video data receiving unit 508 and a video data separating unit 509. The second video data receiving unit 508 is configured to receive complete video data sent by the peripheral unit; and the video data separating unit 509 is configured to obtain the masked video data and the non-masked video data by separating the complete video data received by the second video data receiving unit.
  • Preferably, the monitoring platform further includes a storing unit and an association establishing unit.
  • The storing unit is configured to store the masked video data into a masked video file and store the non-masked video data into a non-masked video file, and the masked video file includes one or more video files.
  • The association establishing unit is configured to establish an association between the masked video file and the non-masked video file.
  • The video request receiving unit 501 is specifically configured to receive a video request that includes view time and is sent by the first monitoring terminal.
  • The acquiring unit 503 is specifically configured to acquire video data corresponding to the view time from the non-masked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and are associated with the non-masked video file and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area when the determined result of the determining unit 502 is yes.
  • Further, the association establishing unit is specifically configured to record a non-masked video index and a masked video index and establish an association between the non-masked video index and the masked video index, where the non-masked video index includes the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file.
  • Correspondingly, the acquiring unit 503 is specifically configured to obtain, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index, acquire the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index, and acquire the video data corresponding to the view time from the non-masked video file; and further specifically configured to acquire, when the determined result of the determining unit 502 is yes, the masked video index associated with the non-masked video index according to the association, acquire, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area, and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area.
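The index matching described above can be sketched as a lookup by device identifier and view time, followed by a hop through the recorded association. Illustrative Python; the field names (`device`, `start`, `end`, `kind`, `file`) are assumptions based on the description, not a defined storage format:

```python
# Illustrative index lookup: match the non-masked video index on device
# identifier and on the view time falling within [start, end], then use
# the association to reach the masked video index. Field names are
# hypothetical stand-ins for the indexes described in the text.

def find_indexes(indexes, associations, device_id, view_time):
    """Return (non_masked_index, associated masked index or None)."""
    for idx in indexes:
        if (idx["kind"] == "non-masked"
                and idx["device"] == device_id
                and idx["start"] <= view_time <= idx["end"]):
            return idx, associations.get(idx["file"])
    return None, None

indexes = [{"device": "cam-1", "start": 0, "end": 100,
            "kind": "non-masked", "file": "nm.dat"}]
associations = {"nm.dat": {"kind": "masked", "file": "mk.dat"}}
nm_idx, mk_idx = find_indexes(indexes, associations, "cam-1", 42)
```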
  • A functional unit described in the second embodiment of the present invention can be used to implement the method described in the first embodiment.
  • Preferably, the video request receiving unit 501, the determining unit 502, the setting request receiving unit 505, and the description information sending unit 506 are located on an SCU of the monitoring platform, and the acquiring unit 503, the video data sending unit 504, the first video data receiving unit 507, the second video data receiving unit 508, and the video data separating unit 509 are located on an MU of the monitoring platform.
  • According to the second embodiment of the present invention, after receiving a video request of a monitoring terminal, a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art. In addition, according to the second embodiment of the present invention, area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • According to the first embodiment of the present invention, a third embodiment of the present invention provides a monitoring terminal 600.
  • As shown in FIG. 15, the monitoring terminal includes a video request sending unit 601, a video data receiving unit 602, and a playing unit 603.
  • The video request sending unit 601 is configured to send a video request to a monitoring platform, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • The video data receiving unit 602 is configured to receive first masked video data and the non-masked video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area; and further configured to receive the non-masked video data that is sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data in the masked video data.
  • The playing unit is configured to merge and play the first masked video data and the non-masked video data that are received by the video data receiving unit 602, or configured to play the non-masked video data received by the video data receiving unit 602.
  • Preferably, if the first masked video data includes one channel of video data, the playing unit is specifically configured to decode the first masked video data to obtain a masked video data frame, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in the masked video data frame, add, according to description information of the first masked area, the extracted pixel data to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • Optionally, if the first masked video data includes multiple channels of video data, the playing unit is specifically configured to decode each channel of video data in the first masked video data to obtain a masked video data frame of the channel of video data, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in masked video data frames of all channels of video data, where the masked video data frames have a same timestamp, add the extracted pixel data to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • A functional unit described in the third embodiment of the present invention can be used to implement the method described in the first embodiment.
  • According to the first embodiment of the present invention, a fourth embodiment of the present invention provides a peripheral unit 700.
  • As shown in FIG. 16, the peripheral unit includes a description information receiving unit 701, a video data encoding unit 702, and a video data sending unit 703.
  • The description information receiving unit 701 is configured to receive description information of a masked area, where the description information is sent by a monitoring platform.
  • The video data encoding unit 702 is configured to encode, according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area.
  • The video data sending unit 703 is configured to send the non-masked video data and the masked video data to the monitoring platform, so that the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, or sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • Preferably, the video data encoding unit 702 is specifically configured to: when the masked area includes one area, encode a video picture in the captured video picture into one channel of video data according to the description information of the masked area, where the video picture corresponds to the masked area; or when the masked area includes multiple areas, encode video pictures in the captured video picture into one channel of video data according to the description information of the masked area, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data each, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data, where the video pictures correspond to areas with same permission among the multiple areas included in the masked area; and further specifically configured to encode a video picture in the captured video picture into the non-masked video data according to the description information of the masked area, where the video picture corresponds to the non-masked area.
  • A functional unit described in the fourth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • As shown in FIG. 17, a fifth embodiment of the present invention provides a monitoring platform 1000, including:
    • a processor 1010, a communications interface 1020, a memory 1030, and a bus 1040.
  • The processor 1010, the communications interface 1020, and the memory 1030 communicate with each other through the bus 1040.
  • The communications interface 1020 is configured to communicate with a network element, for example, communicate with a monitoring terminal or a peripheral unit.
  • The processor 1010 is configured to execute a program 1032.
  • Specifically, the program 1032 may include a program code, and the program code includes a computer operation instruction.
  • The processor 1010 is configured to execute a computer program stored in the memory and may specifically be a central processing unit (CPU), which is a core unit of a computer.
  • The memory 1030 is configured to store the program 1032. The memory 1030 may include a high-speed RAM memory, or may further include a non-volatile memory, for example, at least one disk memory.
  • The program 1032 may specifically include a video request receiving unit 1032-1, a determining unit 1032-2, an acquiring unit 1032-3, and a video data sending unit 1032-4.
  • The video request receiving unit 1032-1 is configured to receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • The determining unit 1032-2 is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • The acquiring unit 1032-3 is configured to acquire the non-masked video data and configured to acquire the first masked video data when a determined result of the determining unit 1032-2 is yes.
  • The video data sending unit 1032-4 is configured to: when the determined result of the determining unit 1032-2 is yes, send the first monitoring terminal the first masked video data and the non-masked video data that are acquired by the acquiring unit 1032-3, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data that are acquired by the acquiring unit 1032-3 to obtain merged video data, and send the merged video data to the first monitoring terminal; and further configured to: when the determined result of the determining unit 1032-2 is no, send the first monitoring terminal the non-masked video data acquired by the acquiring unit 1032-3.
  • Optionally, the program further includes a setting request receiving unit 1032-5.
  • The setting request receiving unit 1032-5 is configured to receive a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area.
  • The monitoring platform further includes a description information sending unit 1032-6 and a first video data receiving unit 1032-7. The description information sending unit 1032-6 is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit 1032-7 is configured to receive the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area.
  • The monitoring platform further includes a second video data receiving unit 1032-8 and a video data separating unit 1032-9. The second video data receiving unit 1032-8 is configured to receive complete video data sent by the peripheral unit; and the video data separating unit 1032-9 is configured to obtain the masked video data and the non-masked video data by separating the complete video data received by the second video data receiving unit.
  • Preferably, the program further includes a storing unit and an association establishing unit.
  • The storing unit is configured to store the masked video data into a masked video file and store the non-masked video data into a non-masked video file, and the masked video file includes one or more video files.
  • The association establishing unit is configured to establish an association between the masked video file and the non-masked video file.
  • The video request receiving unit 1032-1 is specifically configured to receive a video request that includes view time and is sent by the first monitoring terminal.
  • The acquiring unit 1032-3 is specifically configured to acquire video data corresponding to the view time from the non-masked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and are associated with the non-masked video file and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area when the determined result of the determining unit 1032-2 is yes.
  • Further, the association establishing unit is specifically configured to record a non-masked video index and a masked video index and establish an association between the non-masked video index and the masked video index, where the non-masked video index includes the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file.
  • Correspondingly, the acquiring unit 1032-3 is specifically configured to obtain, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index, acquire the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index, and acquire the video data corresponding to the view time from the non-masked video file; and further specifically configured to acquire, when the determined result of the determining unit 1032-2 is yes, the masked video index associated with the non-masked video index according to the association, acquire, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area, and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area.
  • For specific implementation of each unit in the program 1032, refer to a corresponding unit in the second embodiment of the present invention, and therefore no further details are provided herein. A functional unit described in the fifth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • According to the fifth embodiment of the present invention, after receiving a video request of a monitoring terminal, a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art. In addition, according to the fifth embodiment of the present invention, area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
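  • The area-based permission routing summarized above might be modeled as follows (a hedged sketch; the per-user set of permitted area identifiers and the byte-string stand-ins for video streams are illustrative assumptions, not the specification's data model):

```python
def select_video_data(user_areas, masked_streams, non_masked_stream):
    """Return what a user's monitoring terminal may receive: always the
    non-masked stream, plus only those masked streams whose area the
    user has permission to acquire."""
    permitted = {area: data for area, data in masked_streams.items()
                 if area in user_areas}
    return non_masked_stream, permitted

# Two masked areas with separate permissions; streams are stand-in bytes.
masked_streams = {"area-1": b"m1", "area-2": b"m2"}

plain, granted = select_video_data({"area-1"}, masked_streams, b"plain")
plain_only, nothing = select_video_data(set(), masked_streams, b"plain")
```

A user with no permission thus receives only the non-masked stream, which is the behavior that removes the security risk of sending masked image data to every terminal.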
  • As shown in FIG. 18, a sixth embodiment of the present invention provides a monitoring terminal 2000, including:
    • a processor 2010, a communications interface 2020, a memory 2030, and a bus 2040.
  • The processor 2010, the communications interface 2020, and the memory 2030 communicate with each other through the bus 2040.
  • The communications interface 2020 is configured to communicate with a network element, for example, communicate with a monitoring platform.
  • The processor 2010 is configured to execute a program 2032.
  • Specifically, the program 2032 may include a program code, and the program code includes a computer operation instruction.
  • The processor 2010 is configured to execute a computer program stored in the memory and may specifically be a central processing unit (CPU), which is a core unit of a computer.
  • The memory 2030 is configured to store the program 2032. The memory 2030 may include a high-speed RAM, and may further include a non-volatile memory, for example, at least one disk memory.
  • The program 2032 may specifically include a video request sending unit 2032-1, a video data receiving unit 2032-2, and a playing unit 2032-3.
  • The video request sending unit 2032-1 is configured to send a video request to a monitoring platform, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • The video data receiving unit 2032-2 is configured to receive first masked video data and the non-masked video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area; and further configured to receive the non-masked video data that is sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data in the masked video data.
  • The playing unit 2032-3 is configured to merge and play the first masked video data and the non-masked video data that are received by the video data receiving unit 2032-2, or configured to play the non-masked video data received by the video data receiving unit 2032-2.
  • Preferably, if the first masked video data includes one channel of video data, the playing unit is specifically configured to decode the first masked video data to obtain a masked video data frame, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in the masked video data frame, add, according to description information of the first masked area, the extracted pixel data to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • Optionally, if the first masked video data includes multiple channels of video data, the playing unit is specifically configured to decode each channel of video data in the first masked video data to obtain a masked video data frame of the channel of video data, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in masked video data frames of all channels of video data, where the masked video data frames have a same timestamp, add the extracted pixel data to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
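  • The merge step described for the playing unit — copying decoded masked pixel data back into the non-masked frame at the coordinates given by the description information of the first masked area — can be sketched as follows (a simplified model in which frames are plain 2D lists of pixel values rather than decoded YUV/RGB data; the `(x, y)` coordinate keys are an assumption standing in for the description information):

```python
def merge_frames(non_masked_frame, masked_frames):
    """Produce a complete video data frame by adding each masked area's
    pixel block into the non-masked frame at that area's top-left
    coordinate. Frames with matching timestamps are assumed to have
    been paired by the caller."""
    merged = [row[:] for row in non_masked_frame]  # keep input intact
    for (x, y), block in masked_frames.items():
        for dy, row in enumerate(block):
            for dx, pixel in enumerate(row):
                merged[y + dy][x + dx] = pixel
    return merged

# 4x4 non-masked frame with the masked area blanked to 0.
frame = [[1, 1, 1, 1],
         [1, 0, 0, 1],
         [1, 0, 0, 1],
         [1, 1, 1, 1]]
# One channel of masked video data covering the 2x2 area at (1, 1).
masked = {(1, 1): [[9, 9], [9, 9]]}

complete = merge_frames(frame, masked)
```

With multiple channels of masked video data, `masked_frames` simply carries one block per channel sharing the same timestamp, matching the multi-channel case above.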
  • For specific implementation of each unit in the program 2032, refer to a corresponding unit in the third embodiment of the present invention, and therefore no further details are provided herein.
  • A functional unit described in the sixth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • As shown in FIG. 19, a seventh embodiment of the present invention provides a peripheral unit 3000, including:
    • a processor 3010, a communications interface 3020, a memory 3030, and a bus 3040.
  • The processor 3010, the communications interface 3020, and the memory 3030 communicate with each other through the bus 3040.
  • The communications interface 3020 is configured to communicate with a network element, for example, communicate with a monitoring platform.
  • The processor 3010 is configured to execute a program 3032.
  • Specifically, the program 3032 may include a program code, and the program code includes a computer operation instruction.
  • The processor 3010 is configured to execute a computer program stored in the memory, and may specifically be a central processing unit (CPU), which is a core unit of a computer.
  • The memory 3030 is configured to store the program 3032. The memory 3030 may include a high-speed RAM, and may further include a non-volatile memory, for example, at least one disk memory.
  • The program 3032 may specifically include a description information receiving unit 3032-1, a video data encoding unit 3032-2, and a video data sending unit 3032-3.
  • The description information receiving unit 3032-1 is configured to receive description information of a masked area, where the description information is sent by a monitoring platform.
  • The video data encoding unit 3032-2 is configured to encode, according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area.
  • The video data sending unit 3032-3 is configured to send the non-masked video data and the masked video data to the monitoring platform, so that the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, or sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • Preferably, the video data encoding unit 3032-2 is specifically configured to: when the masked area includes one area, encode a video picture in the captured video picture into one channel of video data according to the description information of the masked area, where the video picture corresponds to the masked area; or when the masked area includes multiple areas, encode video pictures in the captured video picture into one channel of video data according to the description information of the masked area, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data each, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data, where the video pictures correspond to areas with same permission among the multiple areas included in the masked area; and further specifically configured to encode a video picture in the captured video picture into the non-masked video data according to the description information of the masked area, where the video picture corresponds to the non-masked area.
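  • The splitting performed by the video data encoding unit — producing one channel of video data per masked area plus a non-masked picture whose masked regions are replaced by a set pixel value — might look like this before actual encoding (a sketch; the rectangle tuples `(x, y, w, h)` and 2D-list pictures are illustrative assumptions standing in for the description information and captured frames):

```python
def split_picture(picture, masked_areas, fill=0):
    """Split a captured picture into one masked pixel block per area
    and one non-masked picture in which each masked region is filled
    with a set pixel value, following the area coordinates from the
    description information. Areas are (x, y, w, h) rectangles."""
    masked_channels = {}
    non_masked = [row[:] for row in picture]
    for area in masked_areas:
        x, y, w, h = area
        # Cut out the masked area's pixels for its own channel.
        masked_channels[area] = [row[x:x + w] for row in picture[y:y + h]]
        # Blank the same region in the non-masked picture.
        for yy in range(y, y + h):
            for xx in range(x, x + w):
                non_masked[yy][xx] = fill
    return non_masked, masked_channels

pic = [[1,  2,  3,  4],
       [5,  6,  7,  8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]

plain, channels = split_picture(pic, [(1, 1, 2, 2)])
```

Areas sharing the same permission could instead be grouped into one channel by keying `masked_channels` on a permission identifier rather than on each rectangle, mirroring the third encoding option above.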
  • For specific implementation of each unit in the program 3032, refer to a corresponding unit in the fourth embodiment of the present invention, and therefore no further details are provided herein.
  • A functional unit described in the seventh embodiment of the present invention can be used to implement the method described in the first embodiment.
  • According to the first to the seventh embodiments of the present invention, an eighth embodiment of the present invention provides a video surveillance system 4000.
  • As shown in FIG. 20, the video surveillance system includes a monitoring terminal 4010 and a monitoring platform 4020.
  • The monitoring terminal 4010 is specifically the monitoring terminal according to the third or the sixth embodiment.
  • The monitoring platform 4020 is specifically the monitoring platform according to the second or the fifth embodiment.
  • As shown in FIG. 21, the video surveillance system may further include a peripheral unit 4030, which is specifically the peripheral unit according to the fourth or the seventh embodiment.
  • A functional unit described in the eighth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • According to the eighth embodiment of the present invention, after receiving a video request of a monitoring terminal, a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • In addition, according to the eighth embodiment of the present invention, area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • A person of ordinary skill in the art may be aware that, with reference to the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the present invention.
  • It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, reference may be made to a corresponding process in the foregoing method embodiments, and therefore no further details are provided herein.
  • In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely exemplary. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one position, or may be distributed on a plurality of network units. A part or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
  • When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or part of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or a part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes: any medium that can store a program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
  • Claims (13)

    1. A method for implementing video access, wherein a monitoring platform communicates with a first monitoring terminal through a transmission network, the method comprising:
      receiving, by the monitoring platform, a video request sent by the first monitoring terminal, wherein the video request comprises a device identifier, and video data of a peripheral unit identified by the device identifier comprises non-masked area video data corresponding to a non-masked area and masked area video data corresponding to a masked area for which different areas have respective permissions;
      determining, by the monitoring platform, whether a user of the first monitoring terminal has permission to acquire first masked area video data in the masked area video data, wherein the first masked area video data corresponds to a first masked area having a respective permission, and the first masked area comprises a part of the masked area; and
      if a determined result is yes, acquiring the first masked area video data and the non-masked area video data; sending description information of the first masked area to the first monitoring terminal, and then sending the first masked area video data and the non-masked area video data to the first monitoring terminal, wherein the description information of the first masked area which is sent to the first monitoring terminal is used by the first monitoring terminal to merge the first masked area video data and the non-masked area video data, or merging the acquired first masked area video data and non-masked area video data to obtain merged video data according to description information of the first masked area and sending the merged video data to the first monitoring terminal; wherein, the description information of the first masked area is from a masked area setting request that is received from a second monitoring terminal, and comprises a coordinate of the first masked area; and
      if a determined result is no, acquiring the non-masked area video data and sending the non-masked area video data to the first monitoring terminal, wherein:
      before the receiving a video request sent by a first monitoring terminal, the method comprises:
      receiving the masked area setting request sent by a second monitoring terminal, wherein the masked area setting request comprises the device identifier of the peripheral unit and description information of the masked area; and
      sending the description information of the masked area to the peripheral unit, and receiving the non-masked area video data and the masked area video data that are sent by the peripheral unit and generated according to the description information of the masked area; or obtaining the masked area video data and the non-masked area video data by separating, according to the description information of the masked area, complete video data received from the peripheral unit.
    2. The method according to claim 1, wherein:
      before the acquiring the first masked area video data and the non-masked area video data, the method comprises:
      storing the masked area video data into a masked video file, storing the non-masked area video data into a non-masked video file, and establishing an association between the masked video file and the non-masked video file, wherein the masked video file comprises one or more video files;
      the video request comprises view time;
      the acquiring the non-masked area video data specifically comprises: acquiring video data corresponding to the view time from the non-masked video file; and
      the acquiring the first masked area video data specifically comprises: acquiring, according to the association, one or more video files that correspond to the first masked area and are associated with the non-masked video file, and acquiring video data corresponding to the view time from the one or more video files corresponding to the first masked area.
    3. The method according to claim 2, wherein:
      the establishing an association between the masked video file and the non-masked video file specifically comprises:
      recording a non-masked video index and a masked video index, wherein the non-masked video index comprises the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked area video data, and an identifier of the non-masked video file, and the masked video index comprises indication information of the masked area video data and an identifier of the masked video file; and establishing an association between the non-masked video index and the masked video index;
      the acquiring the non-masked area video data specifically comprises: obtaining, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are comprised in the video request and the indication information of the non-masked area video data, the device identifier of the peripheral unit, and the video start time and end time that are comprised in the non-masked video index, acquiring the non-masked video file according to the identifier of the non-masked video file comprised in the non-masked video index, and acquiring the video data corresponding to the view time from the non-masked video file; and
      the acquiring the first masked area video data specifically comprises: acquiring, according to the association, the masked video index associated with the non-masked video index, acquiring, according to the identifier of the masked video file comprised in the masked video index, one or more video files corresponding to the first masked area, and acquiring the video data corresponding to the view time from the one or more video files corresponding to the first masked area.
    4. The method according to claim 1, wherein:
      the acquiring the first masked area video data and the non-masked area video data; and sending the first masked area video data and the non-masked area video data to the first monitoring terminal specifically comprises:
      generating an acquiring address of the non-masked area video data and an acquiring address of the first masked area video data and sending the acquiring addresses to the first monitoring terminal, wherein the acquiring address of the first masked area video data or a message carrying the acquiring address of the masked area video data comprises a data type that is used to indicate that video data corresponding to the acquiring address is masked area video data;
      receiving a request that is sent by the first monitoring terminal and comprises the acquiring address of the non-masked area video data, establishing, with the first monitoring terminal according to the acquiring address of the non-masked area video data, a media channel used to send the non-masked area video data, acquiring the non-masked area video data according to the acquiring address of the non-masked area video data, and sending the non-masked area video data through the media channel; and
      receiving a request that is sent by the first monitoring terminal and comprises the acquiring address of the first masked area video data, establishing, with the first monitoring terminal according to the acquiring address of the first masked area video data, a media channel used to send the first masked area video data, acquiring the first masked area video data according to the acquiring address of the first masked area video data, and sending the first masked area video data through the media channel.
    5. A method for implementing video access, wherein a peripheral unit communicates with a monitoring platform through a transmission network, the method comprising:
      receiving, by the peripheral unit, description information of a masked area for which different areas have respective permissions, wherein the description information is sent by the monitoring platform, wherein the description information of the masked area comprises a coordinate of the masked area;
      encoding, by the peripheral unit according to the description information of the masked area, a captured video picture into non-masked area video data corresponding to a non-masked area and masked area video data corresponding to the masked area; and
      sending, by the peripheral unit, the non-masked area video data and the masked area video data to the monitoring platform, so that: the monitoring platform sends the non-masked area video data and first masked area video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked area video data, and sends the non-masked area video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked area video data, wherein the first masked area video data corresponds to a first masked area having a respective permission, and the first masked area comprises a part of the masked area.
    6. The method according to claim 5, wherein:
      the encoding, according to the description information of the masked area, a captured video picture into masked area video data corresponding to the masked area specifically comprises:
      when the masked area comprises one area, encoding a video picture in the captured video picture into one channel of video data, wherein the video picture corresponds to the masked area; or
      when the masked area comprises multiple areas, encoding video pictures in the captured video picture into one channel of video data, wherein the video pictures correspond to the multiple areas comprised in the masked area, or encoding video pictures in the captured video picture into one channel of video data each, wherein the video pictures correspond to the multiple areas comprised in the masked area, or encoding video pictures in the captured video picture into one channel of video data, wherein the video pictures correspond to areas with same permission among the multiple areas comprised in the masked area.
    7. The method according to claim 5, wherein:
      the encoding, according to the description information of the masked area, a captured video picture into masked area video data corresponding to the masked area specifically comprises: directly encoding a video picture in the captured video picture into the masked area video data, wherein the video picture corresponds to the masked area; or encoding a video picture in the captured video picture after filling the video picture by using a set pixel value so as to generate the masked area video data, wherein the video picture corresponds to the non-masked area; and
      the encoding, according to the description information of the masked area, a captured video picture into non-masked area video data corresponding to the non-masked area specifically comprises: directly encoding a video picture in the captured video picture into the non-masked area video data, wherein the video picture corresponds to the non-masked area; or encoding a video picture in the captured video picture after filling the video picture by using a set pixel value so as to generate the non-masked area video data, wherein the video picture corresponds to the masked area.
    8. A monitoring platform, wherein the monitoring platform communicates with a first monitoring terminal through a transmission network, the monitoring platform comprising: a video request receiving unit, a determining unit, an acquiring unit, and a video data sending unit, wherein:
      the video request receiving unit is configured to receive a video request sent by the first monitoring terminal, wherein the video request comprises a device identifier, and video data of a peripheral unit identified by the device identifier comprises non-masked area video data corresponding to a non-masked area and masked area video data corresponding to a masked area for which different areas have respective permissions;
      the determining unit is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked area video data in the masked area video data, wherein the first masked area video data corresponds to a first masked area having a respective permission, and the first masked area comprises a part of the masked area;
      the acquiring unit is configured to acquire the non-masked area video data and configured to acquire the first masked area video data when a determined result of the determining unit is yes; and
      the video data sending unit is configured to: when the determined result of the determining unit is yes, send the first monitoring terminal the first masked area video data and the non-masked area video data that are acquired by the acquiring unit, and description information of the first masked area, wherein the description information of the first masked area which is sent to the first monitoring terminal is used by the first monitoring terminal to merge and play the first masked area video data and the non-masked area video data, or merge the first masked area video data and the non-masked area video data that are acquired by the acquiring unit to obtain merged video data according to the description information of the first masked area, and send the merged video data to the first monitoring terminal; wherein, the description information of the first masked area is from a masked area setting request that is received from a second monitoring terminal, and comprises a coordinate of the first masked area; and further configured to: when the determined result of the determining unit is no, send the first monitoring terminal the non-masked area video data acquired by the acquiring unit, wherein:
      the monitoring platform further comprises: a setting request receiving unit, a description information sending unit, and a first video data receiving unit; the setting request receiving unit is configured to receive a masked area setting request sent by a second monitoring terminal, wherein the masked area setting request comprises a device identifier of the peripheral unit and description information of the masked area; the description information sending unit is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit is configured to receive the non-masked area video data and the masked area video data that are sent by the peripheral unit and generated according to the description information of the masked area; or
      the monitoring platform further comprises: a setting request receiving unit, a second video data receiving unit, and a video data separating unit; the setting request receiving unit is configured to receive a masked area setting request sent by a second monitoring terminal, wherein the masked area setting request comprises a device identifier of the peripheral unit and description information of the masked area; the second video data receiving unit is configured to receive complete video data sent by the peripheral unit; and the video data separating unit is configured to obtain the masked area video data and the non-masked area video data by separating the complete video data received by the second video data receiving unit.
    9. The monitoring platform according to claim 8, further comprising: a storing unit and an association establishing unit, wherein:
      the storing unit is configured to store the masked area video data into a masked video file and store the non-masked area video data into a non-masked video file, wherein the masked video file comprises one or more video files;
      the association establishing unit is configured to establish an association between the masked video file and the non-masked video file;
      the video request receiving unit is specifically configured to receive a video request that comprises view time and is sent by the first monitoring terminal; and
      the acquiring unit is specifically configured to acquire video data corresponding to the view time from the non-masked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and are associated with the non-masked video file, and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area when the determined result of the determining unit is yes.
    10. A peripheral unit, wherein the peripheral unit communicates with a monitoring platform through a transmission network, the peripheral unit comprising: a description information receiving unit, a video data encoding unit, and a video data sending unit, wherein:
      the description information receiving unit is configured to receive, from the monitoring platform, description information of a masked area for which different areas have respective permissions, wherein the description information of the masked area comprises coordinates of the masked area;
      the video data encoding unit is configured to encode, according to the description information of the masked area, a captured video picture into non-masked area video data corresponding to a non-masked area and masked area video data corresponding to the masked area; and
      the video data sending unit is configured to send the non-masked area video data and the masked area video data to the monitoring platform.
    11. The peripheral unit according to claim 10, wherein:
      the video data encoding unit is specifically configured to: according to the description information of the masked area, encode the video pictures that correspond to the multiple areas comprised in the masked area into one channel of video data; or encode the video picture that corresponds to each of the multiple areas comprised in the masked area into one channel of video data each; or encode the video pictures that correspond to areas with the same permission among the multiple areas comprised in the masked area into one channel of video data; and further specifically configured to encode, according to the description information of the masked area, the video picture that corresponds to the non-masked area into the non-masked area video data.
    12. A video surveillance system, comprising: a monitoring terminal and a monitoring platform, wherein the monitoring platform communicates with the monitoring terminal through a transmission network;
      the monitoring terminal comprises a video request sending unit, a video data receiving unit, and a playing unit; wherein:
      the video request sending unit is configured to send a video request to the monitoring platform, wherein the video request comprises a device identifier, and video data of a peripheral unit identified by the device identifier comprises non-masked area video data corresponding to a non-masked area and masked area video data corresponding to a masked area for which different areas have respective permissions;
      the video data receiving unit is configured to receive first masked area video data and the non-masked area video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked area video data in the masked area video data, wherein the first masked area video data corresponds to a first masked area having a respective permission, and the first masked area comprises a part or all of the masked area; and further configured to receive the non-masked area video data that is sent by the monitoring platform when the monitoring platform determines that the user of the monitoring terminal has no permission to acquire the first masked area video data in the masked area video data; and
      the playing unit is configured to merge and play the first masked area video data and the non-masked area video data that are received by the video data receiving unit, or configured to play the non-masked area video data received by the video data receiving unit; and
      the monitoring platform is specifically the monitoring platform according to any one of claims 8-9.
    13. The video surveillance system according to claim 12, further comprising a peripheral unit, wherein:
      the peripheral unit is specifically the peripheral unit according to claim 10 or 11.
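    The permission-gated delivery scheme of claims 8-13 can be sketched in a few lines of Python. This is an illustrative model only, not the patented implementation: all class, field, and function names (MaskedArea, PeripheralVideo, MonitoringPlatform, merge_and_play) are invented for this sketch. It assumes the platform stores non-masked video data and per-area masked video data separately, checks the requesting user's permission for each masked area, returns only the permitted streams, and lets the terminal merge them for playback.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class MaskedArea:
        area_id: str
        permission: str   # permission label required to view this area
        frames: list      # masked area video data (placeholder)

    @dataclass
    class PeripheralVideo:
        device_id: str
        non_masked: list                                   # non-masked area video data
        masked_areas: dict = field(default_factory=dict)   # area_id -> MaskedArea

    class MonitoringPlatform:
        def __init__(self):
            self.videos = {}       # device_id -> PeripheralVideo
            self.user_perms = {}   # user -> set of permission labels

        def store(self, video: PeripheralVideo):
            # As in claim 9: masked and non-masked data are kept in separate,
            # associated stores (modelled here as fields of one record).
            self.videos[video.device_id] = video

        def request_video(self, user: str, device_id: str):
            # As in claims 8 and 12: the non-masked data is always returned;
            # a masked area's data is returned only if the user holds that
            # area's respective permission.
            video = self.videos[device_id]
            perms = self.user_perms.get(user, set())
            permitted = [a for a in video.masked_areas.values()
                         if a.permission in perms]
            return video.non_masked, permitted

    def merge_and_play(non_masked, permitted_areas):
        # Terminal side (claim 12): merge permitted masked-area data with the
        # non-masked data; unpermitted areas simply remain occluded.
        merged = list(non_masked)
        for area in permitted_areas:
            merged.extend(area.frames)
        return merged
    ```

    A user whose permission set contains an area's label receives and merges that area's stream; any other user receives the non-masked stream alone, so the occluded region never leaves the platform.
    
    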
    EP12872312.9A 2012-10-11 2012-10-11 Method, apparatus and system for implementing video occlusion Active EP2741237B1 (en)

    Applications Claiming Priority (1)

    Application Number Priority Date Filing Date Title
    PCT/CN2012/082784 WO2014056171A1 (en) 2012-10-11 2012-10-11 Method, apparatus and system for implementing video occlusion

    Publications (3)

    Publication Number Publication Date
    EP2741237A1 EP2741237A1 (en) 2014-06-11
    EP2741237A4 EP2741237A4 (en) 2014-07-16
    EP2741237B1 true EP2741237B1 (en) 2017-08-09

    Family

    ID=50476881

    Family Applications (1)

    Application Number Title Priority Date Filing Date
    EP12872312.9A Active EP2741237B1 (en) 2012-10-11 2012-10-11 Method, apparatus and system for implementing video occlusion

    Country Status (3)

    Country Link
    EP (1) EP2741237B1 (en)
    CN (1) CN103890783B (en)
    WO (1) WO2014056171A1 (en)

    Cited By (1)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    CN111614930A (en) * 2019-02-22 2020-09-01 浙江宇视科技有限公司 A video surveillance method, system, device and computer-readable storage medium

    Families Citing this family (17)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    CN105208340B (en) * 2015-09-24 2019-10-18 浙江宇视科技有限公司 Method and device for displaying video data
    KR102051985B1 (en) 2015-09-30 2019-12-04 애플 인크. Synchronization of Media Rendering in Heterogeneous Networking Environments
    CN105866853B (en) * 2016-04-13 2019-01-01 同方威视技术股份有限公司 Safety check supervisor control and safety check monitoring terminal
    CN106341664B (en) * 2016-09-29 2019-12-13 浙江宇视科技有限公司 A data processing method and device
    CN108206930A (en) * 2016-12-16 2018-06-26 杭州海康威视数字技术股份有限公司 The method and device for showing image is covered based on privacy
    CN110324704A (en) * 2018-03-29 2019-10-11 优酷网络技术(北京)有限公司 Method for processing video frequency and device
    CN109063499B (en) * 2018-07-27 2021-02-26 山东鲁能软件技术有限公司 Flexible configurable electronic file region authorization method and system
    US11030212B2 (en) * 2018-09-06 2021-06-08 International Business Machines Corporation Redirecting query to view masked data via federation table
    CN110958410A (en) * 2018-09-27 2020-04-03 北京嘀嘀无限科技发展有限公司 Video processing method and device and automobile data recorder
    CN113114548B (en) * 2020-07-07 2022-10-14 德能森智能科技(成都)有限公司 Terminal management method and system based on privacy management
    CN112954458A (en) * 2021-01-20 2021-06-11 浙江大华技术股份有限公司 Video occlusion method, device, electronic device and storage medium
    CN113014949B (en) * 2021-03-10 2022-05-06 读书郎教育科技有限公司 Student privacy protection system and method for smart classroom course playback
    US20230154497A1 (en) * 2021-11-18 2023-05-18 Parrot AI, Inc. System and method for access control, group ownership, and redaction of recordings of events
    CN114189660A (en) * 2021-12-24 2022-03-15 威艾特科技(深圳)有限公司 Monitoring method and system based on omnidirectional camera
    CN114419720B (en) * 2022-03-30 2022-10-18 浙江大华技术股份有限公司 Image occlusion method and system and computer readable storage medium
    CA3264600A1 (en) * 2022-08-31 2024-03-07 SimpliSafe, Inc. SECURITY DEVICE ZONES
    US12118864B2 (en) 2022-08-31 2024-10-15 SimpliSafe, Inc. Security device zones

    Citations (2)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    US6509926B1 (en) * 2000-02-17 2003-01-21 Sensormatic Electronics Corporation Surveillance apparatus for camera surveillance system
    FR2972886A1 (en) * 2011-03-17 2012-09-21 Thales Sa Method for compression of video sequence using video source coder, involves marking one of compressed flows with identifier that is not-interpretable by coder, and interlacing and synchronizing flows to obtain single compressed flow

    Family Cites Families (7)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    CA2592511C (en) * 2004-12-27 2011-10-11 Emitall Surveillance S.A. Efficient scrambling of regions of interest in an image or video to preserve privacy
    JP4671133B2 (en) * 2007-02-09 2011-04-13 富士フイルム株式会社 Image processing device
    CN101610396A (en) * 2008-06-16 2009-12-23 北京智安邦科技有限公司 Intellective video monitoring device module and system and method for supervising thereof with secret protection
    US8576282B2 (en) * 2008-12-12 2013-11-05 Honeywell International Inc. Security system with operator-side privacy zones
    CN101710979B (en) * 2009-12-07 2015-03-04 北京中星微电子有限公司 Method for managing video monitoring system and central management server
    CN101848378A (en) * 2010-06-07 2010-09-29 中兴通讯股份有限公司 Domestic video monitoring device, system and method
    CN102547212A (en) * 2011-12-13 2012-07-04 浙江元亨通信技术股份有限公司 Splicing method of multiple paths of video images

    Also Published As

    Publication number Publication date
    CN103890783B (en) 2017-02-22
    EP2741237A4 (en) 2014-07-16
    CN103890783A (en) 2014-06-25
    WO2014056171A1 (en) 2014-04-17
    EP2741237A1 (en) 2014-06-11

    Similar Documents

    Publication Publication Date Title
    EP2741237B1 (en) Method, apparatus and system for implementing video occlusion
    US10594988B2 (en) Image capture apparatus, method for setting mask image, and recording medium
    US11023618B2 (en) Systems and methods for detecting modifications in a video clip
    JP5346338B2 (en) Method for indexing video and apparatus for indexing video
    CN111133764B (en) Information processing apparatus, information providing apparatus, control method, and storage medium
    KR102320455B1 (en) Method, device, and computer program for transmitting media content
    US20180176650A1 (en) Information processing apparatus and information processing method
    KR102133207B1 (en) Communication apparatus, communication control method, and communication system
    US10757463B2 (en) Information processing apparatus and information processing method
    JPWO2004004350A1 (en) Image data distribution system, image data transmitting apparatus and image data receiving apparatus thereof
    WO2021147702A1 (en) Video processing method and apparatus
    US12254044B2 (en) Video playing method, apparatus, and system, and computer storage medium
    CN108810567B (en) Audio and video visual angle matching method, client and server
    CN106657110A (en) Encrypted transmission method and apparatus of streaming data
    WO2022111554A1 (en) View switching method and apparatus
    JPWO2004004363A1 (en) Image encoding device, image transmitting device, and image photographing device
    CN107241585B (en) Video monitoring method and system
    CN113545099B (en) Information processing apparatus, reproduction processing apparatus, information processing method, and reproduction processing method
    CN110636336A (en) Transmitting apparatus and method, receiving apparatus and method, and computer-readable storage medium
    JP7218105B2 (en) File generation device, file generation method, processing device, processing method, and program
    WO2018044731A1 (en) Systems and methods for hybrid network delivery of objects of interest in video
    CN115695858B (en) SEI (solid-state imaging device) encryption-based virtual film-making video master film coding and decoding control method
    WO2024151732A1 (en) Protecting augmented reality call content
    JP2024165003A (en) Scene description editing device and program
    JP2024116702A (en) Scene description editing device and program

    Legal Events

    Date Code Title Description
    PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase (Free format text: ORIGINAL CODE: 0009012)
    17P Request for examination filed (Effective date: 20131003)
    AK Designated contracting states (Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR)
    A4 Supplementary search report drawn up and despatched (Effective date: 20140617)
    RIC1 Information provided on ipc code assigned before grant (Ipc: G06K 9/60 20060101AFI20140611BHEP; Ipc: G08B 13/196 20060101ALI20140611BHEP)
    17Q First examination report despatched (Effective date: 20150630)

    DAX Request for extension of the european patent (deleted)
    GRAP Despatch of communication of intention to grant a patent (Free format text: ORIGINAL CODE: EPIDOSNIGR1)
    INTG Intention to grant announced (Effective date: 20170103)
    GRAJ Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted (Free format text: ORIGINAL CODE: EPIDOSDIGR1)
    GRAP Despatch of communication of intention to grant a patent (Free format text: ORIGINAL CODE: EPIDOSNIGR1)
    INTG Intention to grant announced (Effective date: 20170310)
    GRAS Grant fee paid (Free format text: ORIGINAL CODE: EPIDOSNIGR3)
    GRAA (expected) grant (Free format text: ORIGINAL CODE: 0009210)

    AK Designated contracting states (Kind code of ref document: B1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR)
    REG Reference to a national code (GB: FG4D)
    REG Reference to a national code (CH: EP; AT: REF, Ref document number: 917570, Kind code: T, Effective date: 20170815)
    REG Reference to a national code (IE: FG4D)
    REG Reference to a national code (FR: PLFP; Year of fee payment: 6)
    REG Reference to a national code (DE: R096; Ref document number: 602012035854)
    REG Reference to a national code (NL: MP; Effective date: 20170809)
    REG Reference to a national code (LT: MG4D)
    REG Reference to a national code (AT: MK05; Ref document number: 917570; Kind code: T; Effective date: 20170809)

    PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
    LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT: HR, NL, AT, SE, LT, FI (Effective date: 20170809); NO (Effective date: 20171109)

    PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
    LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT: RS, LV, ES, PL (Effective date: 20170809); BG (Effective date: 20171109); GR (Effective date: 20171110); IS (Effective date: 20171209)

    PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
    LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT: CZ, DK, RO (Effective date: 20170809)

    REG Reference to a national code (DE: R097; Ref document number: 602012035854)

    PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
    LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT: IT, SM, MC, EE, SK (Effective date: 20170809)

    REG Reference to a national code (CH: PL)
    PLBE No opposition filed within time limit (Free format text: ORIGINAL CODE: 0009261)
    STAA Information on the status of an ep patent application or granted ep patent (Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT)
    26N No opposition filed (Effective date: 20180511)
    REG Reference to a national code (IE: MM4A)

    PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
    LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES: CH, LI (Effective date: 20171031); LU (Effective date: 20171011)
    REG Reference to a national code (BE: MM; Effective date: 20171031)
    PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
    LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT: SI (Effective date: 20170809)
    LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES: BE (Effective date: 20171031)

    REG Reference to a national code (FR: PLFP; Year of fee payment: 7)
    PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
    LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES: MT, IE (Effective date: 20171011); CY (Effective date: 20170809)
    LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO: HU (Effective date: 20121011)
    LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT: MK, TR, PT, AL (Effective date: 20170809)
    REG Reference to a national code (DE: R079; Ref document number: 602012035854; Free format text: PREVIOUS MAIN CLASS: G06K0009600000; Ipc: G06V0030200000)
    P01 Opt-out of the competence of the unified patent court (upc) registered (Effective date: 20230524)

    PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]
    DE: Payment date: 20240904; Year of fee payment: 13
    GB: Payment date: 20250904; Year of fee payment: 14
    FR: Payment date: 20250908; Year of fee payment: 14