US10425608B2 - Image processing method and camera - Google Patents

Image processing method and camera

Info

Publication number
US10425608B2
Authority
US
United States
Prior art keywords
image
coordinates
camera
rule
intelligent analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/392,636
Other versions
US20170111604A1 (en)
Inventor
Bo Zhou
Yongjin Cai
Xilei Xu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of US20170111604A1
Assigned to HUAWEI TECHNOLOGIES CO., LTD. Assignors: XU, Xilei; CAI, Yongjin; ZHOU, Bo
Application granted
Publication of US10425608B2

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445 Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N5/44504 Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19617 Surveillance camera constructional details
    • G08B13/1963 Arrangements allowing camera rotation to change view, e.g. pivoting camera, pan-tilt and zoom [PTZ]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N5/232
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present disclosure relates to the field of video surveillance, and in particular, to an image processing method and a camera.
  • Intelligent analysis is generally applied to a fixed digital camera; that is, a related intelligent analysis rule is set in the fixed digital camera, and then the intelligent analysis function can be used normally.
  • When intelligent analysis is used on a digital camera provided with a pan-tilt-zoom (PTZ) function (for example, full-sphere movement up and down and left and right, lens zoom, and zoom control), the digital camera needs to be fixed in order to ensure normal use of the intelligent analysis function.
  • Otherwise, after the camera rotates or zooms, the intelligent analysis rule that is set previously may fail and cannot function.
  • Embodiments of the present disclosure provide an image processing method and a camera such that when a camera with a PTZ function is rotated and/or zooms in/out, a relative position of an intelligent analysis rule relative to a corresponding reference object can remain unchanged.
  • an image processing method includes receiving an image operation instruction, where the image operation instruction includes performing at least one of a rotation operation or a zoom operation, performing an operation on an image according to the image operation instruction, where the image is overlaid with a rule which is a tripwire or a geometric region and includes a reference object for the rule, and displaying the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • displaying the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation includes establishing a coordinate system, and calculating coordinates of the rule in the coordinate system after the operation, and displaying, according to the coordinates of the rule after the operation, the rule on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • calculating coordinates of the rule after the operation includes calculating an operation parameter, where the operation parameter includes at least one of a rotation angle or a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation.
  • the image operation instruction is performing a rotation operation
  • calculating an operation parameter includes calculating a rotation angle
  • calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the calculated rotation angle, the coordinates of the rule after the operation.
  • calculating, according to the calculated rotation angle, the coordinates of the rule after the operation includes selecting a reference point in the rule and determining coordinates of the reference point before an operation, calculating, according to the calculated rotation angle and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculating, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determining, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
  • the image operation instruction is performing a zoom operation.
  • Calculating an operation parameter includes calculating a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the calculated zoom ratio, the coordinates of the rule after the operation.
  • the image operation instruction is performing a rotation operation and a zoom operation.
  • Calculating an operation parameter includes calculating a rotation angle and a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the rotation angle and the zoom ratio that are calculated, the coordinates of the rule after the operation.
  • calculating, according to the rotation angle and the zoom ratio that are calculated, the coordinates of the rule after the operation includes selecting a reference point in the rule and determining coordinates of the reference point before an operation, calculating, according to the rotation angle and the zoom ratio that are calculated and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculating, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determining, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
  • the image on which the operation has been performed is a first image
  • the method further includes setting an effective condition for the first image, and redisplaying the first image when the effective condition is satisfied.
  • the method further includes displaying an image that is obtained after the subsequent operation is performed when the effective condition is not satisfied and a subsequent operation is performed on the image.
  • the effective condition includes effective time.
  • a camera including a central processing unit (CPU) configured to receive an image operation instruction, where the image operation instruction includes performing at least one of a rotation operation or a zoom operation, where the CPU is further configured to perform an operation on an image according to the image operation instruction, where the image is overlaid with a rule which is a tripwire or a geometric region and includes a reference object for the rule, and an encoding processor is configured to display the rule on the image on which the CPU has performed the operation such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • the CPU is further configured to calculate coordinates of the rule in a pre-established coordinate system after the operation
  • the encoding processor is further configured to display, according to the coordinates of the rule after the operation, the rule on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • the CPU is further configured to acquire an operation parameter, where the operation parameter includes at least one of a rotation angle or a zoom ratio, and calculate, according to the operation parameter, the coordinates of the rule in the pre-established coordinate system after the operation.
  • the camera further includes a motor control board, where when the image operation instruction is performing a rotation operation, the motor control board is configured to calculate a rotation angle, and notify the CPU of the calculated rotation angle, and the CPU is further configured to calculate, according to the rotation angle notified by the motor control board, the coordinates of the rule in the pre-established coordinate system after the operation.
  • the CPU is further configured to determine coordinates of a preselected reference point in the rule before an operation, calculate, according to the calculated rotation angle and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculate, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determine, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
  • the encoding processor is further configured to calculate a zoom ratio
  • the CPU is further configured to calculate, according to the zoom ratio calculated by the encoding processor, the coordinates of the rule after the operation.
  • the camera further includes a motor control board, where when the image operation instruction is performing a rotation operation and a zoom operation, the motor control board is configured to calculate a rotation angle.
  • the encoding processor is further configured to calculate a zoom ratio when the image operation instruction is performing a rotation operation and a zoom operation, and the CPU is further configured to calculate, according to the rotation angle calculated by the motor control board and the zoom ratio calculated by the encoding processor, the coordinates of the rule after the operation.
  • the CPU is further configured to select a reference point in the rule and determine coordinates of the reference point before an operation, calculate, according to the rotation angle and the zoom ratio that are calculated and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculate, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determine, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
  • the CPU is further configured to set an effective condition for the image on which the operation has been performed, and redisplay the first image when the effective condition is satisfied.
  • a rule on the image is adjusted such that a relative position of the rule, displayed on the image on which the operation has been performed, relative to a reference object remains unchanged before and after the operation.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 2A, FIG. 2B, and FIG. 2C are schematic effect diagrams of an application of an image processing method according to an embodiment of the present disclosure;
  • FIG. 3A, FIG. 3B, FIG. 3C, and FIG. 3D are schematic effect diagrams when an effective condition is set;
  • FIG. 4 is a control logic flowchart of an image processing method according to an embodiment of the present disclosure.
  • FIG. 5 is an optical imaging diagram
  • FIG. 6A is a structural block diagram of a camera according to an embodiment of the present disclosure.
  • FIG. 6B is a structural block diagram of another camera according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of a hardware logical architecture of a camera according to an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure.
  • this embodiment of the present disclosure provides an image processing method, which is described based on a camera. The method includes the following steps.
  • Step 11: Receive an image operation instruction, where the image operation instruction includes performing at least one of a rotation operation or a zoom operation.
  • the image operation instruction is used to instruct to perform an operation on an image in a camera lens.
  • The image operation instruction may be a rotation operation instruction used to perform a rotation operation on the image or a zoom operation instruction used to perform a zoom operation on the image.
  • performing a rotation operation on the image refers to rotating the camera lens to rotate the image.
  • the image operation instruction may be sent by a user according to actual requirements.
  • Step 12: Perform an operation on an image according to the image operation instruction, where the image is overlaid with a rule which is a tripwire or a geometric region and includes a reference object for the rule.
  • When the user instructs the camera to perform the rotation operation, the camera performs a corresponding rotation operation on the image after receiving the rotation operation instruction.
  • the rotation operation may be, for example, rotating to the left or right, or rotating downwards or upwards. Rotating to the left is used as an example.
  • the camera controls the lens to rotate to the left when the user instructs the camera to rotate to the left, and in this case, an image displayed in the lens correspondingly changes.
  • Similarly, when the user instructs the camera to zoom, an image displayed in the camera lens also needs to be changed; that is, corresponding zoom adjustment needs to be performed on the image.
  • the image displayed in the camera lens is overlaid with an intelligent analysis rule, and the intelligent analysis rule may be a tripwire rule or a geometric region rule.
  • the reference object in this embodiment of the present disclosure is relative to the rule, and refers to a person or an object of interest on the image shot by the camera lens, that is, a person or an object of interest on the shot image is selected as a reference object.
  • the reference object is related to the intelligent analysis rule that is overlaid on the image and that is relative to the reference object.
  • different intelligent analysis rules may be set respectively for different reference objects on the image.
  • Step 13: Display the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • Because the reference object and the rule related to the reference object still exist on the image on which the operation has been performed, the reference object and the related rule are displayed on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • FIG. 2A, FIG. 2B, and FIG. 2C are schematic effect diagrams of the image processing method according to this embodiment of the present disclosure.
  • FIG. 2A, FIG. 2B, and FIG. 2C give a description using an example in which the rule is a tripwire.
  • Reference objects in FIG. 2A, FIG. 2B, and FIG. 2C are 101 and 102.
  • The tripwire rule relative to the reference objects 101 and 102 is 103.
  • FIG. 2A displays an original image including the tripwire rule before the rotation operation starts.
  • The tripwire rule 103 is set between the reference objects 101 and 102.
  • FIG. 2B displays an image that is obtained after the original image is rotated to the left by an angle (it is assumed that the angle is 30 degrees) if the technical solutions of the present disclosure are not used. It can be seen from FIG. 2B that after the original image is rotated, the tripwire rule 103 is not located between the reference objects 101 and 102; that is, the relative position of the tripwire rule 103 relative to the reference object 101 and the relative position of the tripwire rule 103 relative to the reference object 102 have changed.
  • FIG. 2C displays an image that is obtained after the original image is rotated to the left by an angle (it is assumed that the angle is 30 degrees) if the technical solutions of the present disclosure are used. It can be seen from FIG. 2C that after the original image is rotated, the tripwire rule 103 is still located between the reference objects 101 and 102; that is, it is ensured that the relative position of the tripwire rule 103 relative to the reference object 101 and the relative position of the tripwire rule 103 relative to the reference object 102 remain unchanged before and after the operation.
  • the rule corresponding to the reference object may be adjusted first.
  • The adjustment may be performed in either of two ways. In the first way, the rule that corresponds to the reference object before the operation is cleared, a rule of the reference object after the operation is calculated, and the calculated rule is displayed on the image on which the operation has been performed such that the relative position of the calculated rule relative to the reference object remains unchanged before and after the operation. In the second way, the rule that corresponds to the reference object before the operation is not cleared; instead, it is moved to an appropriate position on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • a rule on the image is adjusted such that a relative position of the rule, displayed on the image on which the operation has been performed, relative to a reference object remains unchanged before and after the operation.
  • In step 13, displaying the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation may include establishing a coordinate system, calculating coordinates of the rule in the coordinate system after the operation, and displaying, according to the coordinates of the rule after the operation, the rule on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • the coordinate system may be a two-dimensional coordinate system, and the two dimensions may be a horizontal direction and a vertical direction.
  • An origin of the coordinate system may be a sensor center (that is, a picture center).
  • coordinates of points and coordinates of a rule on the image of the camera before and after the operation may be recorded.
  • Only some representative points on the rule may be selected to represent the rule. For example, for a tripwire rule, coordinates of the two end points of the tripwire may be selected, rather than coordinates of all points on the whole tripwire. Similarly, for a geometric region rule such as a rectangle rule, only coordinates of the four vertices of the rectangle may be selected. For another example, for a triangle rule, only coordinates of the three vertices of the triangle may be selected. A manner of selecting coordinate points of other geometric region rules is similar to those described above, and details are not described herein again.
  • coordinates of the points on the rule and on the image before and after the operation may be uniquely determined by means of establishing the coordinate system.
  • the rule may be displayed on the image on which the operation has been performed such that a relative position of the rule relative to the related reference object remains unchanged before and after the operation.
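The representative-point storage described above can be sketched as follows; the `Rule` class, its field names, and the sample coordinates are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

# (x, y) coordinates in a 2-D system whose origin is the picture center.
Point = Tuple[float, float]

@dataclass
class Rule:
    """An intelligent analysis rule stored as representative points only."""
    kind: str            # "tripwire" or "region"
    points: List[Point]  # 2 end points for a tripwire, N vertices for a region

# A tripwire is fully described by its two end points...
tripwire = Rule(kind="tripwire", points=[(-100.0, 20.0), (150.0, -30.0)])

# ...and a rectangular region by its four vertices.
region = Rule(kind="region",
              points=[(-50.0, -50.0), (50.0, -50.0), (50.0, 50.0), (-50.0, 50.0)])

assert len(tripwire.points) == 2
assert len(region.points) == 4
```

Storing only these points keeps the bookkeeping before and after an operation small: recalculating a rule reduces to recalculating a handful of coordinates.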
  • Calculating the coordinates of the rule after the operation may include calculating an operation parameter, where the operation parameter includes at least one of a rotation angle or a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation.
  • The image operation instruction may include the following three types: a rotation operation instruction, a zoom operation instruction, and a combined rotation and zoom operation instruction.
  • the three cases are separately described in the following.
  • the image operation instruction is performing a rotation operation.
  • calculating an operation parameter includes calculating a rotation angle
  • calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the calculated rotation angle, the coordinates of the rule after the operation.
  • the image operation instruction is performing a zoom operation
  • calculating an operation parameter includes calculating a zoom ratio
  • calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the calculated zoom ratio, the coordinates of the rule after the operation.
  • the image operation instruction is performing a rotation operation and a zoom operation.
  • Calculating an operation parameter includes calculating a rotation angle and a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the rotation angle and the zoom ratio that are calculated, the coordinates of the rule after the operation.
  • a specific operation parameter is calculated in the foregoing manner, and then the coordinates of the rule in the coordinate system after the operation may be calculated according to the specific operation parameter. In this way, it is finally ensured that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
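As a concrete sketch of how coordinates of a rule might be recalculated from an operation parameter, the snippet below treats a pan/tilt rotation as a translation in the image plane and a zoom as a scaling about the picture-center origin. This is a simplified pinhole-style approximation for illustration only, not the patent's dome-camera computation; the function name, the focal length `f`, and the sign conventions are assumptions.

```python
import math

def rule_coords_after_op(points, pan_deg=0.0, tilt_deg=0.0, zoom_ratio=1.0, f=1000.0):
    """Recompute rule coordinates after a rotation and/or zoom operation.

    Simplified model (an assumption, not the patent's exact math):
    - a pan of `pan_deg` shifts image points horizontally by f * tan(pan_deg),
    - a tilt of `tilt_deg` shifts them vertically by f * tan(tilt_deg),
    - a zoom scales coordinates about the picture-center origin by `zoom_ratio`.
    `f` is a focal length expressed in pixels.
    """
    dx = f * math.tan(math.radians(pan_deg))
    dy = f * math.tan(math.radians(tilt_deg))
    return [((x + dx) * zoom_ratio, (y + dy) * zoom_ratio) for (x, y) in points]

# Tripwire end points before the operation (origin = picture center).
tripwire = [(-100.0, 20.0), (150.0, -30.0)]

# Pure 2x zoom: both end points scale about the center.
assert rule_coords_after_op(tripwire, zoom_ratio=2.0) == [(-200.0, 40.0), (300.0, -60.0)]

# Pure rotation: both end points shift by the same offset, so the rule
# keeps its position relative to objects in the scene.
moved = rule_coords_after_op(tripwire, pan_deg=30.0)
assert abs((moved[1][0] - moved[0][0]) - 250.0) < 1e-6
```

In the patent's terms, the operation parameter (rotation angle, zoom ratio) is supplied by the motor control board and the encoding processor; the actual mapping would go through the dome-camera coordinates of the reference point rather than this flat translation.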
  • an effective condition may be set for an image displayed after the operation, and the image is redisplayed when the effective condition is satisfied.
  • the image on which the operation has been performed may be referred to as a first image
  • an image that is obtained after an operation is performed on the first image is referred to as a second image
  • an image that is obtained after an operation is performed on the second image is referred to as a third image
  • the rest may be deduced by analogy.
  • the image processing method provided in this embodiment of the present disclosure may also include setting an effective condition for the first image, and redisplaying the first image when the effective condition is satisfied.
  • effective conditions may also be set.
  • the second image or the third image may also be redisplayed when the effective condition is satisfied.
  • When the effective condition is not satisfied and a subsequent operation is performed on the image, an image that is obtained after the subsequent operation is performed is displayed. Further, if an effective condition is set for the first image, when the first image does not satisfy the effective condition and an operation is performed on the first image according to an image operation instruction at this time, an image that is obtained after the operation is performed on the first image is directly displayed. When an operation is performed subsequently according to the image operation instruction, once the effective condition for the first image is satisfied, the first image is directly displayed.
  • FIG. 3A displays an original image and a tripwire rule overlaid on the image, and an effective condition 1 for redisplaying the image is set in FIG. 3A .
  • FIG. 3B displays an image that is obtained after the image in FIG. 3A is rotated, and an effective condition 2 for redisplaying the image is also set in FIG. 3B .
  • FIG. 3C shows that when the effective condition 1 set in FIG. 3A is satisfied, the image in FIG. 3A is redisplayed and it is ensured that the relative position of the rule, overlaid on the image, relative to the reference object remains unchanged (that is, FIG. 3C and FIG. 3A are exactly the same).
  • FIG. 3D shows that when the effective condition 2 set in FIG. 3B is satisfied, the image in FIG. 3B is redisplayed and it is ensured that the relative position of the rule, overlaid on the image, relative to the reference object remains unchanged (that is, FIG. 3D and FIG. 3B are the same).
  • the effective condition includes but is not limited to effective time.
  • The effective time may be, for example, a time length measured from a current moment, or a condition that the display time of a current image exceeds a preset time length.
  • Effective time may be set for each image on which the operation has been performed (for example, FIG. 3A and FIG. 3B), and the corresponding image is redisplayed when the effective time is satisfied. Moreover, it is ensured that the relative position of the rule, overlaid on the image, relative to the reference object remains unchanged (FIG. 3C, FIG. 3D, and so on). In this way, different rules may be used in different conditions.
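The effective-time mechanism can be sketched as follows; `RuleView`, `view_to_display`, and the timestamp representation are illustrative assumptions, with effective time used as the effective condition:

```python
class RuleView:
    """Pairs an operated-on image (and its rule coordinates) with an
    effective condition. Here the condition is an effective time:
    the view is redisplayed once `effective_at` has passed.
    All names and the timestamp representation are illustrative."""
    def __init__(self, name, rule_points, effective_at):
        self.name = name
        self.rule_points = rule_points
        self.effective_at = effective_at  # absolute time, in seconds

def view_to_display(views, now, current):
    """Redisplay the first stored view whose effective time is satisfied;
    otherwise keep showing the current view."""
    for view in views:
        if now >= view.effective_at:
            return view
    return current

t0 = 1000.0
fig_3a = RuleView("FIG. 3A", [(-100, 20), (150, -30)], effective_at=t0 + 60)
fig_3b = RuleView("FIG. 3B", [(-40, 10), (210, -40)], effective_at=t0 + 120)

# Before either effective time arrives, the current view stays on screen.
assert view_to_display([fig_3a, fig_3b], now=t0 + 30, current=fig_3b) is fig_3b
# Once FIG. 3A's effective time is satisfied, that image is redisplayed
# together with its rule, unchanged relative to the reference object.
assert view_to_display([fig_3a, fig_3b], now=t0 + 61, current=fig_3b) is fig_3a
```

Because each stored view carries its own rule coordinates, redisplaying it automatically restores the rule in the correct position relative to the reference object.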
  • the camera may include a lens, a sensor, an encoding processor, a CPU, a motor control board, and a control motor.
  • the control motor includes at least one of a left control motor, a right control motor, an upper control motor, or a lower control motor.
  • The motor control board controls the control motor (or the CPU directly implements this control).
  • Related coordinates are recorded and calculated by the motor control board (or the CPU), and fed back to the processor in a timely manner.
  • the encoding processor may acquire a zoom ratio for controlling a lens, and may feed the zoom ratio back to the CPU. It needs to be pointed out that, the encoding processor shown in the flowchart of this embodiment of the present disclosure includes a lens, a sensor, and an encoding processor, that is, the encoding processor can perform zoom processing in an integrated manner.
  • An embodiment of the present disclosure provides a flowchart of an image processing method. Referring to FIG. 4 , the method includes the following steps.
  • Step 21: A user sets a tripwire or geometric rule in a CPU using a client of a management server or a client of a digital camera.
  • Step 22: The CPU stores the corresponding rule to a memory, and sets the rule in an encoding processor.
  • Step 23: The encoding processor overlays the rule on an image, and returns a setting success message to the CPU.
  • the rule overlaid on the image is relative to a reference object.
  • the image includes a reference object related to the rule.
  • Step 24: The user delivers, using the client of the management server or the client of the digital camera, an image operation instruction of performing rotation and zoom on the image.
  • this embodiment is described using an image operation instruction of rotation and zoom as an example.
  • the instruction may also be only a rotation operation or a zoom operation.
  • a coordinate system may be pre-established, that is, a two-dimensional coordinate system is established using the sensor center as its origin.
  • Both a zoom ratio and an initial angle of the camera are recorded, where the motor control board and the encoding processor notify the CPU of the initial angle and the zoom ratio, respectively, such that the CPU can store and process them.
  • Step 25: After receiving a corresponding command, the CPU notifies both the motor control board and the encoding processor.
  • Step 26: The motor control board controls a motor to rotate by a given scale, calculates new coordinates of the rule on the image while controlling the rotation, and feeds the new coordinates back to the CPU in real time.
  • the coordinates of the rule are relative to the pre-established two-dimensional coordinate system.
  • a manner of establishing the coordinate system includes but is not limited to using a sphere center point of a sphere formed by the rotation of the camera as the origin of the two-dimensional coordinate system.
  • Step 27: The encoding processor performs zoom processing, calculates a zoom ratio simultaneously, and feeds the zoom ratio back to the CPU in real time.
  • Step 28: After acquiring the coordinates of the rule and the zoom ratio, the CPU calculates, according to the coordinates and the zoom ratio, final coordinates of the rule after the operation, and notifies the encoding processor of the final coordinates of the rule.
  • Step 29: The encoding processor displays the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation.
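The combination performed by the CPU in Step 28 can be sketched as follows. This is a simplified, hypothetical model: it treats the rotation reported by the motor control board as a pixel shift (dx, dy) and applies the zoom ratio reported by the encoding processor as a scaling about the picture center. The precise angular algorithm is the one described later in this disclosure; all names here are illustrative:

```python
def update_rule_points(points, dx, dy, zoom_ratio, center=(0.0, 0.0)):
    # Sketch of Step 28: recompute rule vertex coordinates after the lens
    # has rotated (shifting the picture by dx, dy pixels, as fed back by
    # the motor control board) and zoomed (by zoom_ratio, as fed back by
    # the encoding processor). Simplified pixel-shift model, not the
    # angular math given later in the disclosure.
    cx, cy = center
    new_points = []
    for x, y in points:
        # translate for the rotation, then scale about the picture center
        x_shifted, y_shifted = x - dx, y - dy
        new_points.append((cx + (x_shifted - cx) * zoom_ratio,
                           cy + (y_shifted - cy) * zoom_ratio))
    return new_points
```

The final coordinates returned here are what the CPU would hand to the encoding processor in Step 29 so that the redrawn rule stays aligned with the reference object.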
  • the coordinates of the dome camera are relative coordinates, and may be angle coordinates established using the sphere center point of the sphere formed by the rotation of the camera as the origin and using a direction of the dome camera as a reference.
  • Coordinates of a reference object refer to two-dimensional coordinates established using a picture center as an origin.
  • a detailed algorithm for keeping the relative position of the rule (a tripwire or a region) relative to the reference object unchanged may be considered from two aspects, that is, the rotation operation case and the zoom operation case.
  • some particular reference points on the rule may be selected to replace the whole rule, and the coordinates of the rule can be determined by determining the coordinates of the reference points.
  • an algorithm in the rotation operation case includes a detailed algorithm of moving a reference point to a picture center (the sensor center) and a detailed algorithm of calculating coordinates of the moved reference point after the dome camera is rotated.
  • Rotation of a camera module may be divided into two types, that is, a vertical rotation and a horizontal rotation.
  • the vertical rotation may be processed first, and the method of processing the horizontal rotation is similar to the method of processing the vertical rotation; once the angle by which the camera module needs to rotate is obtained, the camera module is instructed to rotate by that angle.
  • In a horizontal direction, as shown in the optical imaging diagram in FIG. 5, a reference point A′ is selected from the rule. Assuming that the reference point A′ actually corresponds to a real object A (that is, A′ is the image of the real object A on the sensor), that the distance of the object A from the theoretical optical lens is L, and that the distance by which the object A deviates from the central line of the optical system is D, then to move the image of the point A to the sensor center position, the camera module needs to rotate horizontally by arctg(D/L).
  • Similarly, the angle by which the camera module needs to rotate in a vertical direction is arctg(k2*H/f), where H is the physical height of the sensor, f is the focal length, and k2 represents the ratio of the distance of the reference point from the picture center in the vertical direction to the picture height.
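Under the formulas above, these rotation angles can be computed directly (`math.atan` standing in for arctg). The function and parameter names are illustrative, not from the disclosure:

```python
import math

def horizontal_rotation_angle(k1, W, f):
    # Horizontal angle arctg(k1 * W / f) needed to bring the reference
    # point to the picture center, where k1 is the point's horizontal
    # distance from the picture center as a fraction of the picture
    # width, W is the physical sensor width, and f is the focal length.
    return math.atan(k1 * W / f)

def vertical_rotation_angle(k2, H, f):
    # Vertical counterpart arctg(k2 * H / f), with H the physical sensor
    # height and k2 the vertical fraction.
    return math.atan(k2 * H / f)
```

A point already at the picture center (k1 = k2 = 0) yields a zero rotation angle, as expected.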
  • the detailed algorithm of calculating the position of the moved reference point after the dome camera is rotated includes calculating coordinates of the dome camera when the reference point is rotated to the sensor center, and calculating, according to the coordinates of the dome camera when the reference point is rotated to the sensor center, coordinates of the reference point on the sensor after the rotation.
  • a reference point is still selected from the region or tripwire. Supposing that the initial coordinates of the reference point before the rotation are (x1, y1), the corresponding PTZ coordinates of the dome camera are (p1, q1). The coordinates of the reference point need to be recalculated when the dome camera rotates to new coordinates (p2, q2). Here, p1 is a horizontal angle coordinate, q1 is a vertical angle coordinate, the focal length corresponding to the coordinates (p1, q1) is f1, and the focal length corresponding to (p2, q2) is f2.
  • the reference point (x1, y1) may be rotated to the sensor center.
  • p1 − p0 = arctg((x1 − x0)/a total quantity of horizontal pixels on the picture*W/f1)
  • q1 − q0 = arctg((y1 − y0)/a total quantity of vertical pixels on the picture*H/f1), where (x1 − x0)/a total quantity of horizontal pixels on the picture is the k1 mentioned above.
  • Because the focal length f1 corresponding to the coordinates (p1, q1) may be determined according to the zoom ratio, the coordinates (p0, q0) of the dome camera may be calculated.
  • (x2, y2), that is, the coordinate position of the reference point on the sensor after the rotation, may be calculated according to the foregoing formulas.
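The calculation just described can be sketched as follows. The first function solves the formulas above for (p0, q0); the second is our reading of applying the same formulas in reverse to obtain (x2, y2) once the dome camera has moved to (p2, q2) with focal length f2. All parameter names are assumptions for illustration:

```python
import math

def dome_coords_at_center(x1, y1, p1, q1, f1, x0, y0, Nx, Ny, W, H):
    # (p0, q0): dome camera coordinates when the reference point (x1, y1)
    # is rotated to the picture center (x0, y0), solved from
    #   p1 - p0 = arctg((x1 - x0) / Nx * W / f1)
    #   q1 - q0 = arctg((y1 - y0) / Ny * H / f1)
    # Nx, Ny: total pixel counts of the picture; W, H: physical sensor
    # width and height; f1: focal length at (p1, q1).
    p0 = p1 - math.atan((x1 - x0) / Nx * W / f1)
    q0 = q1 - math.atan((y1 - y0) / Ny * H / f1)
    return p0, q0

def point_after_rotation(p2, q2, f2, p0, q0, x0, y0, Nx, Ny, W, H):
    # Assumed inverse step: sensor coordinates (x2, y2) of the reference
    # point after the dome camera has rotated to (p2, q2) with focal
    # length f2, obtained by inverting the formulas above.
    x2 = x0 + math.tan(p2 - p0) * f2 / W * Nx
    y2 = y0 + math.tan(q2 - q0) * f2 / H * Ny
    return x2, y2
```

As a consistency check, keeping the dome camera at (p1, q1) with focal length f1 must map the reference point back to its original position (x1, y1).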
  • the coordinates of the reference point after the operation may be directly determined using the foregoing algorithm when an image operation instruction is performing a rotation operation and a zoom operation.
  • the coordinates of the reference point after the operation can be determined, and further, the coordinates of the rule are determined and the rule is displayed on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • FIG. 6A is a structural block diagram of a camera according to an embodiment of the present disclosure.
  • the camera 60 includes a CPU 61 and an encoding processor 62, where the CPU 61 is configured to receive an image operation instruction, where the image operation instruction includes performing at least one of a rotation operation or a zoom operation.
  • the CPU 61 is further configured to perform an operation on an image according to the image operation instruction, where the image is overlaid with a rule which is a tripwire or a geometric region and includes a reference object for the rule, and the encoding processor 62 is configured to display the rule on the image on which the CPU 61 has performed the operation such that a relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • the CPU 61 is further configured to calculate coordinates of the rule in a pre-established coordinate system after the operation
  • the encoding processor 62 is further configured to display, according to the coordinates of the rule after the operation, the rule on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
  • the CPU 61 may be further configured to acquire an operation parameter, where the operation parameter includes at least one of a rotation angle or a zoom ratio, and calculate, according to the operation parameter, the coordinates of the rule in the pre-established coordinate system after the operation.
  • the camera 60 further includes a motor control board 63 , where when the image operation instruction is performing a rotation operation, the motor control board 63 is configured to calculate a rotation angle, and notify the CPU 61 of the calculated rotation angle, and the CPU 61 is further configured to calculate, according to the rotation angle notified by the motor control board 63 , the coordinates of the rule in the pre-established coordinate system after the operation.
  • the CPU 61 may be further configured to determine coordinates of a preselected reference point in the rule before an operation, calculate, according to the calculated rotation angle and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculate, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determine, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
  • the encoding processor 62 is further configured to calculate a zoom ratio
  • the CPU 61 may be further configured to calculate, according to the zoom ratio calculated by the encoding processor 62 , the coordinates of the rule after the operation.
  • the camera 60 further includes a motor control board 63 , where when the image operation instruction is performing a rotation operation and a zoom operation, the motor control board 63 is configured to calculate a rotation angle.
  • the encoding processor 62 is further configured to calculate a zoom ratio when the image operation instruction is performing a rotation operation and a zoom operation, and the CPU 61 is further configured to calculate, according to the rotation angle calculated by the motor control board 63 and the zoom ratio calculated by the encoding processor 62 , the coordinates of the rule after the operation.
  • the CPU 61 may be further configured to select a reference point in the rule and determine coordinates of the reference point before an operation, calculate, according to the rotation angle and the zoom ratio that are calculated and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculate, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determine, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
  • the CPU 61 is further configured to set an effective condition for the image on which the operation has been performed, and redisplay the first image when the effective condition is satisfied.
  • FIG. 6A and FIG. 6B only show some key components of the camera that are mainly involved in the present disclosure. They are shown in this way to better highlight the emphasis of the present disclosure, and this does not mean that the camera 60 is provided only with the components shown in the figures.
  • the camera 70 may include a CPU 71 , an encoding processor 72 , a motor control board 73 , and a motor 74 .
  • the CPU 71 receives an image operation instruction, and instructs, according to the image operation instruction, the encoding processor 72 and/or the motor control board 73 to perform corresponding operations. For example, when the image operation instruction is performing a rotation and zoom operation, the CPU 71 notifies the motor control board 73 of a rotation instruction, and notifies the encoding processor 72 of a zoom instruction. After receiving the instruction of the CPU 71 , the motor control board 73 controls the motor 74 to rotate.
  • the motor 74 may include at least one of a left control motor, a right control motor, an upper control motor, or a lower control motor, and is configured to control a camera lens to rotate such that a lens image rotates.
  • the encoding processor 72 controls the lens to perform zoom processing.
  • the motor control board 73 (or the CPU 71 directly) controls the motor 74.
  • Related coordinates are recorded and calculated by the motor control board 73 (or the CPU 71), and fed back to the CPU 71 in real time.
  • the encoding processor 72 may acquire a zoom ratio for controlling the lens, and may feed the zoom ratio back to the CPU 71.
  • the CPU 71 is configured to calculate, according to the coordinates fed back by the motor control board 73 and the zoom ratio fed back by the encoding processor 72, the coordinates of the rule after the operation, and to instruct the encoding processor 72 to display, on the lens, the rule after the operation.
  • the camera in this embodiment of the present disclosure may also include other components, and the other components may be, for example, a lens and a sensor.
  • the other components function in their usual manner, and are not described herein.
  • the camera provided in the foregoing embodiment and the embodiment of the image processing method are based on the same conception.
  • the program may be stored in a computer-readable storage medium.
  • the storage medium may include a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An image processing method and a camera are provided so that a relative position of an intelligent analysis rule relative to a corresponding reference object can remain unchanged when a camera with a pan tilt zoom (PTZ) function is rotated and/or zooms in/out. The method includes receiving an image operation instruction, where the image operation instruction includes performing at least one of a rotation operation or a zoom operation, performing an operation on an image according to the image operation instruction, where the image is overlaid with a rule which is a tripwire or a geometric region and includes a reference object, and displaying the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of International Patent Application No. PCT/CN2015/082535 filed on Jun. 26, 2015, which claims priority to Chinese Patent Application No. 201410305120.2 filed on Jun. 30, 2014. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.
TECHNICAL FIELD
The present disclosure relates to the field of video surveillance, and in particular, to an image processing method and a camera.
BACKGROUND
In the field of video surveillance, because of inconvenience brought by manual surveillance and an increasingly mature intelligent analysis algorithm, intelligent analysis is more widely applied.
Intelligent analysis is generally applied to a fixed digital camera, that is, a related intelligent analysis rule is set in the fixed digital camera, and then, an intelligent analysis function is normally used. When intelligent analysis is used on a digital camera provided with a pan tilt zoom (PTZ) function (for example, PTZ full-sphere movement (up and down, left and right), lens zoom, and zoom control), the camera needs to stay fixed in order to ensure normal use of the intelligent analysis function. However, this is obviously a waste of resources. If the digital camera is not fixed, when the camera is rotated or zooms in/out, the intelligent analysis rule that is set previously may fail and cannot function.
SUMMARY
Embodiments of the present disclosure provide an image processing method and a camera such that when a camera with a PTZ function is rotated and/or zooms in/out, a relative position of an intelligent analysis rule relative to a corresponding reference object can remain unchanged.
According to a first aspect, an image processing method is provided, where the method includes receiving an image operation instruction, where the image operation instruction includes performing at least one of a rotation operation or a zoom operation, performing an operation on an image according to the image operation instruction, where the image is overlaid with a rule which is a tripwire or a geometric region and includes a reference object for the rule, and displaying the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation.
With reference to the first aspect, in a first implementation manner of the first aspect, displaying the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation includes establishing a coordinate system, and calculating coordinates of the rule in the coordinate system after the operation, and displaying, according to the coordinates of the rule after the operation, the rule on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, calculating coordinates of the rule after the operation includes calculating an operation parameter, where the operation parameter includes at least one of a rotation angle or a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the image operation instruction is performing a rotation operation, calculating an operation parameter includes calculating a rotation angle, and calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the calculated rotation angle, the coordinates of the rule after the operation.
With reference to the third implementation manner of the first aspect, in a fourth implementation manner of the first aspect, calculating, according to the calculated rotation angle, the coordinates of the rule after the operation includes selecting a reference point in the rule and determining coordinates of the reference point before an operation, calculating, according to the calculated rotation angle and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculating, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determining, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
With reference to the second implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the image operation instruction is performing a zoom operation. Calculating an operation parameter includes calculating a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the calculated zoom ratio, the coordinates of the rule after the operation.
With reference to the second implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the image operation instruction is performing a rotation operation and a zoom operation. Calculating an operation parameter includes calculating a rotation angle and a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the rotation angle and the zoom ratio that are calculated, the coordinates of the rule after the operation.
With reference to the sixth implementation manner of the first aspect, in a seventh implementation manner of the first aspect, calculating, according to the rotation angle and the zoom ratio that are calculated, the coordinates of the rule after the operation includes selecting a reference point in the rule and determining coordinates of the reference point before an operation, calculating, according to the rotation angle and the zoom ratio that are calculated and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculating, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determining, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
With reference to the first aspect, in an eighth implementation manner of the first aspect, the image on which the operation has been performed is a first image, and the method further includes setting an effective condition for the first image, and redisplaying the first image when the effective condition is satisfied.
With reference to the eighth possible implementation manner of the first aspect, in a ninth implementation manner of the first aspect, the method further includes, when the effective condition is not satisfied and a subsequent operation is performed on the image, displaying an image that is obtained after the subsequent operation is performed.
With reference to the eighth or ninth implementation manner of the first aspect, in a tenth implementation manner of the first aspect, the effective condition includes effective time.
According to a second aspect, a camera is provided, where the camera includes a central processing unit (CPU) configured to receive an image operation instruction, where the image operation instruction includes performing at least one of a rotation operation or a zoom operation, where the CPU is further configured to perform an operation on an image according to the image operation instruction, where the image is overlaid with a rule which is a tripwire or a geometric region and includes a reference object for the rule, and an encoding processor is configured to display the rule on the image on which the CPU has performed the operation such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the CPU is further configured to calculate coordinates of the rule in a pre-established coordinate system after the operation, and the encoding processor is further configured to display, according to the coordinates of the rule after the operation, the rule on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the CPU is further configured to acquire an operation parameter, where the operation parameter includes at least one of a rotation angle or a zoom ratio, and calculate, according to the operation parameter, the coordinates of the rule in the pre-established coordinate system after the operation.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the camera further includes a motor control board, where when the image operation instruction is performing a rotation operation, the motor control board is configured to calculate a rotation angle, and notify the CPU of the calculated rotation angle, and the CPU is further configured to calculate, according to the rotation angle notified by the motor control board, the coordinates of the rule in the pre-established coordinate system after the operation.
With reference to the third possible implementation manner of the second aspect, in a fourth possible implementation manner of the second aspect, the CPU is further configured to determine coordinates of a preselected reference point in the rule before an operation, calculate, according to the calculated rotation angle and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculate, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determine, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
With reference to the second possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the encoding processor is further configured to calculate a zoom ratio, and the CPU is further configured to calculate, according to the zoom ratio calculated by the encoding processor, the coordinates of the rule after the operation.
With reference to the second possible implementation manner of the second aspect, in a sixth possible implementation manner of the second aspect, the camera further includes a motor control board, where when the image operation instruction is performing a rotation operation and a zoom operation, the motor control board is configured to calculate a rotation angle. The encoding processor is further configured to calculate a zoom ratio when the image operation instruction is performing a rotation operation and a zoom operation, and the CPU is further configured to calculate, according to the rotation angle calculated by the motor control board and the zoom ratio calculated by the encoding processor, the coordinates of the rule after the operation.
With reference to the sixth possible implementation manner of the second aspect, in a seventh possible implementation manner of the second aspect, the CPU is further configured to select a reference point in the rule and determine coordinates of the reference point before an operation, calculate, according to the rotation angle and the zoom ratio that are calculated and the coordinates of the reference point before the operation, coordinates of a dome camera that are corresponding to a current picture when the reference point is rotated to a picture center, calculate, according to the coordinates of the dome camera that are corresponding to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation, and determine, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
With reference to the second aspect, in an eighth possible implementation manner of the second aspect, the CPU is further configured to set an effective condition for the image on which the operation has been performed, and redisplay the first image when the effective condition is satisfied.
By means of the foregoing technical solutions, in the image processing method and the camera that are provided in the embodiments of the present disclosure, when an image operation instruction is being received and a corresponding operation is being performed on an image, a rule on the image is adjusted such that a relative position of the rule, displayed on the image on which the operation has been performed, relative to a reference object remains unchanged before and after the operation.
BRIEF DESCRIPTION OF DRAWINGS
To describe the technical solutions in the embodiments of the present disclosure more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. The accompanying drawings in the following description show merely some embodiments of the present disclosure, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure;
FIG. 2A, FIG. 2B, and FIG. 2C are schematic effect diagrams of an application of an image processing method according to an embodiment of the present disclosure;
FIG. 3A, FIG. 3B, FIG. 3C, and FIG. 3D are schematic effect diagrams when an effective condition is set;
FIG. 4 is a control logic flowchart of an image processing method according to an embodiment of the present disclosure;
FIG. 5 is an optical imaging diagram;
FIG. 6A is a structural block diagram of a camera according to an embodiment of the present disclosure;
FIG. 6B is a structural block diagram of another camera according to an embodiment of the present disclosure; and
FIG. 7 is a schematic diagram of a hardware logical architecture of a camera according to an embodiment of the present disclosure.
DESCRIPTION OF EMBODIMENTS
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the following further describes the embodiments of the present disclosure in detail with reference to the accompanying drawings.
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure. Referring to FIG. 1, this embodiment of the present disclosure provides an image processing method, which is described based on a camera. The method includes the following steps.
Step 11: Receive an image operation instruction, where the image operation instruction includes performing at least one of a rotation operation or a zoom operation.
The image operation instruction is used to instruct to perform an operation on an image in a camera lens. The image operation instruction may be a rotation operation instruction used to perform a rotation operation on the image, or a zoom operation instruction used to perform a zoom operation on the image. In this embodiment of the present disclosure, performing a rotation operation on the image refers to rotating the camera lens to rotate the image.
The image operation instruction may be sent by a user according to actual requirements.
Step 12: Perform an operation on an image according to the image operation instruction, where the image is overlaid with a rule which is a tripwire or a geometric region and includes a reference object for the rule.
When the user instructs a camera to perform the rotation operation, the camera performs a corresponding rotation operation on the image after receiving the rotation operation instruction. The rotation operation may be, for example, rotating to the left or right, or rotating downwards or upwards. Rotating to the left is used as an example. The camera controls the lens to rotate to the left when the user instructs the camera to rotate to the left, and in this case, an image displayed in the lens correspondingly changes.
Similarly, when the user instructs the camera to zoom in/out, an image displayed in the camera lens also needs to be changed, that is, corresponding zoom adjustment needs to be performed on the image.
In this embodiment of the present disclosure, the image displayed in the camera lens is overlaid with an intelligent analysis rule, and the intelligent analysis rule may be a tripwire rule or a geometric region rule.
“The reference object” in this embodiment of the present disclosure is relative to the rule, and refers to a person or an object of interest on the image shot by the camera lens, that is, a person or an object of interest on the shot image is selected as a reference object. The reference object is related to the intelligent analysis rule that is overlaid on the image and that is relative to the reference object. In addition, different intelligent analysis rules may be set respectively for different reference objects on the image.
Step 13: Display the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation.
Further, in this step, if the reference object and the rule related to the reference object still exist on the image on which the operation has been performed, the reference object and the rule related to the reference object are displayed on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
FIG. 2A, FIG. 2B, and FIG. 2C are schematic effect diagrams of the image processing method according to this embodiment of the present disclosure, and give a description using an example in which the rule is a tripwire. Referring to FIG. 2A, FIG. 2B, and FIG. 2C, the reference objects in these figures are 101 and 102, and the tripwire rule relative to the reference objects 101 and 102 is 103. In this embodiment of the present disclosure, it can be ensured that after a rotation or zoom operation is performed on the image, a relative position of the tripwire rule 103 relative to the reference object 101 and a relative position of the tripwire rule 103 relative to the reference object 102 remain unchanged.
Further, FIG. 2A displays an original image including the tripwire rule before the rotation operation starts. Referring to FIG. 2A, before the rotation, the tripwire rule 103 is set between the reference objects 101 and 102. FIG. 2B displays an image that is obtained after the original image is rotated to the left by an angle (it is assumed that the angle is 30 degrees) if the technical solutions of the present disclosure are not used. It can be known from FIG. 2B that after the original image is rotated, the tripwire rule 103 is not located between the reference objects 101 and 102, that is, the relative position of the tripwire rule 103 relative to the reference object 101 and the relative position of the tripwire rule 103 relative to the reference object 102 have changed. FIG. 2C displays an image that is obtained after the original image is rotated to the left by an angle (it is assumed that the angle is 30 degrees) if the technical solutions of the present disclosure are used. It can be known from FIG. 2C that after the original image is rotated, the tripwire rule 103 is still located between the reference objects 101 and 102, that is, it is ensured that the relative position of the tripwire rule 103 relative to the reference object 101 and the relative position of the tripwire rule 103 relative to the reference object 102 remain unchanged before and after the operation.
In this embodiment of the present disclosure, before displaying the rule in step 13, to ensure that the relative position of the rule relative to the reference object remains unchanged before and after the operation, the rule corresponding to the reference object may be adjusted first. The adjustment may be performed in either of two manners. In the first manner, the rule that corresponds to the reference object and that is before the operation is cleared, a rule of the reference object after the operation is calculated, and the calculated rule is displayed on the image on which the operation has been performed such that the relative position of the calculated rule relative to the reference object remains unchanged before and after the operation. In the second manner, the rule that is before the operation is not cleared; instead, it is moved to an appropriate position on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
In the image processing method provided in this embodiment of the present disclosure, when an image operation instruction is being received and a corresponding operation is being performed on an image, a rule on the image is adjusted such that a relative position of the rule, displayed on the image on which the operation has been performed, relative to a reference object remains unchanged before and after the operation.
Optionally, in an embodiment of the present disclosure, in step 13, displaying the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation may include: establishing a coordinate system; calculating coordinates of the rule in the coordinate system after the operation; and displaying, according to the coordinates of the rule after the operation, the rule on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
The coordinate system may be a two-dimensional coordinate system, and the two dimensions may be a horizontal direction and a vertical direction. In this embodiment of the present disclosure, a sensor center (that is, a picture center) may be used as an origin of the two-dimensional coordinate system. In this way, coordinates of points and coordinates of a rule on the image of the camera before and after the operation may be recorded.
In this embodiment of the present disclosure, only some representative points on the rule may be selected to replace the rule. For example, for a tripwire rule, the coordinates of the two end points of the tripwire may be selected rather than the coordinates of all points on the whole tripwire. Similarly, for a geometric region rule, such as a rectangle rule, only the coordinates of the four corner points of the rectangle may be selected. For another example, for a triangle rule, only the coordinates of the three vertices of the triangle may be selected. A manner of selecting coordinate points of other geometric region rules is similar to those described above, and details are not described herein again.
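The key-point representation described above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation; the `Rule` class and the function names are assumptions made for the example.

```python
# Illustrative sketch: a rule is stored as a small set of representative
# points rather than every pixel it covers, so any geometric transform of
# the rule reduces to transforming only those key points.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Point = Tuple[float, float]

@dataclass
class Rule:
    kind: str            # "tripwire" or a geometric region such as "rectangle"
    points: List[Point]  # the representative points that stand in for the rule

# A tripwire is fully described by its two end points.
tripwire = Rule("tripwire", [(120.0, 80.0), (260.0, 80.0)])

# A rectangular region is fully described by its four corner points.
rectangle = Rule("rectangle", [(45.0, 45.0), (45.0, 55.0), (55.0, 45.0), (55.0, 55.0)])

def transform_rule(rule: Rule, f: Callable[[Point], Point]) -> Rule:
    """Re-derive the whole rule by transforming only its key points."""
    return Rule(rule.kind, [f(p) for p in rule.points])
```

Any rotation or zoom mapping can then be applied through `transform_rule`, which is the point of the representative-point scheme: the cost is per key point, not per pixel.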
In this embodiment of the present disclosure, coordinates of the points on the rule and on the image before and after the operation may be uniquely determined by means of establishing the coordinate system. In this case, the rule may be displayed on the image on which the operation has been performed such that a relative position of the rule relative to the related reference object remains unchanged before and after the operation.
In this embodiment of the present disclosure, after a corresponding operation is performed on an image, coordinates of a reference object on the image and coordinates of a rule that is overlaid on the image and that is related to the reference object vary with the operation. Therefore, when a rotation operation and/or a zoom operation is performed, a rotation angle of the rotation operation and/or a zoom ratio of the zoom operation needs to be acquired in order to correspondingly adjust the rule related to the reference object. Optionally, in an embodiment, the calculating the coordinates of the rule after the operation may include calculating an operation parameter, where the operation parameter includes at least one of a rotation angle or a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation.
In this embodiment of the present disclosure, if image operation instructions are different, corresponding operation parameters are also different, that is, the operation parameter correspondingly varies with the image operation instruction.
In this embodiment of the present disclosure, the image operation instruction may include the following three types: a rotation operation instruction, a zoom operation instruction, and a combined rotation and zoom operation instruction. The three cases are separately described in the following.
In a first case, the image operation instruction is performing a rotation operation. In this case, calculating an operation parameter includes calculating a rotation angle, and calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the calculated rotation angle, the coordinates of the rule after the operation.
In a second case, the image operation instruction is performing a zoom operation, calculating an operation parameter includes calculating a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the calculated zoom ratio, the coordinates of the rule after the operation.
In a third case, the image operation instruction is performing a rotation operation and a zoom operation. Calculating an operation parameter includes calculating a rotation angle and a zoom ratio, and calculating, according to the operation parameter, the coordinates of the rule after the operation includes calculating, according to the rotation angle and the zoom ratio that are calculated, the coordinates of the rule after the operation.
A specific operation parameter is calculated in the foregoing manner, and then the coordinates of the rule in the coordinate system after the operation may be calculated according to the specific operation parameter. In this way, it is finally ensured that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
Optionally, in another embodiment of the present disclosure, after each operation, an effective condition may be set for an image displayed after the operation, and the image is redisplayed when the effective condition is satisfied. Further, in this embodiment of the present disclosure, the image on which the operation has been performed may be referred to as a first image, an image that is obtained after an operation is performed on the first image is referred to as a second image, an image that is obtained after an operation is performed on the second image is referred to as a third image, and the rest may be deduced by analogy. In addition to steps 11 to 13, the image processing method provided in this embodiment of the present disclosure may also include setting an effective condition for the first image, and redisplaying the first image when the effective condition is satisfied.
Certainly, for images displayed after subsequent operations, for example, the second image and the third image, effective conditions may also be set. The second image or the third image may also be redisplayed when the effective condition is satisfied.
When the first image does not satisfy the effective condition and a subsequent operation is performed on the image, an image that is obtained after the subsequent operation is performed is displayed. Further, if an effective condition is set for the first image, when the first image does not satisfy the effective condition and an operation is performed on the first image according to an image operation instruction at this time, an image that is obtained after the operation is performed on the first image is directly displayed. When an operation is performed subsequently according to the image operation instruction, once the effective condition for the first image is satisfied, the first image is directly displayed.
Referring to FIG. 3A, FIG. 3B, FIG. 3C, and FIG. 3D, FIG. 3A displays an original image and a tripwire rule overlaid on the image, and an effective condition 1 for redisplaying the image is set in FIG. 3A. FIG. 3B displays an image that is obtained after the image in FIG. 3A is rotated, and an effective condition 2 for redisplaying the image is also set in FIG. 3B. FIG. 3C shows that when the effective condition 1 set in FIG. 3A is satisfied, the image in FIG. 3A is redisplayed and it is ensured that the relative position of the rule, overlaid on the image, relative to the reference object remains unchanged (that is, FIG. 3C and FIG. 3A are exactly the same). FIG. 3D shows that when the effective condition 2 set in FIG. 3B is satisfied, the image in FIG. 3B is redisplayed and it is ensured that the relative position of the rule, overlaid on the image, relative to the reference object remains unchanged (that is, FIG. 3D and FIG. 3B are the same).
In this embodiment of the present disclosure, the effective condition includes but is not limited to effective time. The effective time may be a time length from a current moment, display time of a current image that exceeds a preset time length, or the like.
In this embodiment of the present disclosure, effective time may be set for each image on which the operation has been performed (for example, FIG. 3A and FIG. 3B), and the image is correspondingly redisplayed when the effective time is satisfied. Moreover, it is ensured that the relative position of the rule, overlaid on the image, relative to the reference object remains unchanged (FIG. 3C, FIG. 3D . . . ). In this way, different rules may be used in different conditions.
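As a sketch of this idea, the effective condition can be modeled as a time window attached to each saved view. The structure and all names below are hypothetical, since the disclosure leaves the form of the condition open-ended.

```python
# Hypothetical sketch: each saved view (an image state plus its overlaid
# rules) carries an effective time window; when the current hour falls in a
# view's window, that view is redisplayed with its rules unchanged.
def pick_effective_view(views, hour):
    """views: list of (name, start_hour, end_hour); returns the first view
    whose window contains `hour`, or None if no effective condition holds."""
    for name, start, end in views:
        if start <= hour < end:
            return name
    return None

# Example: a daytime rule set and a nighttime rule set.
views = [("day_view", 8, 20), ("night_view", 20, 24)]
```

With such windows, the camera can switch automatically between, say, a daytime tripwire layout and a stricter nighttime one, which is the "different rules in different conditions" effect described above.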
To better understand the technical solutions of the present disclosure, the present disclosure is further described using specific embodiments herein.
This embodiment of the present disclosure provides, based on a camera with a PTZ function, an image processing method. The camera may include a lens, a sensor, an encoding processor, a CPU, a motor control board, and a control motor. The control motor includes at least one of a left control motor, a right control motor, an upper control motor, or a lower control motor. The motor control board (or the CPU directly) controls the control motor. Related coordinates are recorded and calculated by the motor control board (or the CPU), and fed back to the CPU in real time. The encoding processor may acquire a zoom ratio for controlling the lens, and may feed the zoom ratio back to the CPU. It needs to be pointed out that the encoding processor shown in the flowchart of this embodiment of the present disclosure includes a lens, a sensor, and an encoding processor, that is, the encoding processor can perform zoom processing in an integrated manner.
An embodiment of the present disclosure provides a flowchart of an image processing method. Referring to FIG. 4, the method includes the following steps.
Step 21: A user sets a tripwire or geometric rule in a CPU using a client of a management server or a client of a digital camera.
Step 22: The CPU stores the corresponding rule to a memory, and sets the rule in an encoding processor.
Step 23: The encoding processor overlays the rule on an image, and returns a setting success message to the CPU.
In this embodiment of the present disclosure, the rule overlaid on the image is relative to a reference object. For each rule on the image, the image includes a reference object related to the rule.
Step 24: The user delivers, using the client of the management server or the client of the digital camera, an image operation instruction of performing rotation and zoom on the image.
It needs to be pointed out that, this embodiment is described using an image operation instruction of rotation and zoom as an example. Certainly, in this embodiment of the present disclosure, the instruction may also be only a rotation operation or a zoom operation.
In addition, it should be noted that before the camera initially starts, a coordinate system may be pre-established, that is, a two-dimensional coordinate system is established using the sensor center as its origin. Both a zoom ratio and an initial angle of the camera are recorded and determined, where the motor control board and the encoding processor are used respectively for notifying the CPU of the initial angle and the zoom ratio such that the CPU performs storing and processing.
Step 25: After receiving a corresponding command, the CPU notifies both the motor control board and the encoding processor.
Step 26: The motor control board controls a motor to rotate with a scale, calculates new coordinates of the rule on the image when controlling the motor to rotate with the scale, and feeds the new coordinates back to the CPU in real time.
In this embodiment of the present disclosure, the coordinates of the rule are relative to the pre-established two-dimensional coordinate system. A manner of establishing the coordinate system includes but is not limited to using the sphere center point of the sphere formed by the rotation of the camera as the origin of the two-dimensional coordinate system.
Step 27: The encoding processor performs zoom processing, calculates a zoom ratio simultaneously, and feeds the zoom ratio back to the CPU in real time.
Step 28: After acquiring the coordinates of the rule and the zoom ratio, the CPU calculates, according to the coordinates and the zoom ratio, final coordinates of the rule after the operation, and notifies the encoding processor of the final coordinates of the rule.
Step 29: The encoding processor displays the rule on the image on which the operation has been performed such that a relative position of the rule relative to the reference object remains unchanged before and after the operation.
The following describes, in detail with reference to the accompanying drawings, an algorithm of enabling the relative position of the rule relative to the reference object to remain unchanged before and after the operation.
It needs to be pointed out that, in this embodiment of the present disclosure, the coordinates of the dome camera (or the camera) are a concept of relative coordinates, and may be an angle coordinate that is established using the sphere center point of the sphere formed by the rotation of the camera as the origin and using a direction of the dome camera as a reference. Coordinates of a reference object refer to two-dimensional coordinates established using a picture center as an origin.
In this embodiment of the present disclosure, a detailed algorithm of enabling the relative position of the rule (including a tripwire or a region) relative to the reference object for the rule to remain unchanged may be considered from two aspects, that is, a rotation operation case and a zoom operation case.
In this embodiment of the present disclosure, some particular reference points on the rule may be selected to replace the whole rule, and the coordinates of the rule can be determined by determining the coordinates of the reference points.
The following first describes the rotation operation case. In this embodiment of the present disclosure, an algorithm in the rotation operation case includes a detailed algorithm of moving a reference point to a picture center (the sensor center) and a detailed algorithm of calculating coordinates of the moved reference point after the dome camera is rotated.
The detailed algorithm of moving the reference point to the sensor center is as follows. Rotation of a camera module may be divided into two types, that is, a vertical rotation and a horizontal rotation. The horizontal rotation may be processed first, and the method of processing the vertical rotation is similar to the method of processing the horizontal rotation. Once the corresponding angle by which the camera module needs to rotate is obtained, the camera module is instructed to rotate by that angle.
A horizontal direction: as shown in the optical imaging diagram in FIG. 5, a reference point A′ is selected from a rule. Assuming that the reference point A′ actually corresponds to a real object A (that is, A′ is the image of the real object A on the sensor), the distance of the object A from a theoretical optical lens is L, and the distance by which the object A deviates from the central line of the optical system is D, then to move the image of the point A to the sensor center position, the camera module needs to rotate horizontally by arctg(D/L).
To determine the angle by which the camera module rotates horizontally, D/L needs to be calculated first. By similar triangles in the imaging geometry, D/L=h/f, where h refers to the distance of the imaging point of the point A on the sensor from the center point of the sensor, and f refers to the focal length of the lens. In addition, h=k1*W, where W represents the physical width (sensor_width) of the sensor, and k1 represents the ratio of the distance of the reference point from the picture center in the horizontal direction to the picture width.
Therefore, it may be calculated that the angle by which the camera module needs to rotate horizontally is arctg(D/L)=arctg(k1*W/f).
Similarly, it may be obtained that the angle by which the camera module needs to rotate in a vertical direction is arctg(k2*H/f), where H is the physical height of the sensor, and k2 represents the ratio of the distance of the reference point from the picture center in the vertical direction to the picture height.
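The two angle formulas above can be checked numerically. The sketch below is illustrative only, with `math.atan` standing in for arctg; the function name and all parameter values are invented for the example.

```python
import math

def rotation_angles(k1, k2, sensor_w, sensor_h, focal_len):
    """Pan and tilt angles (in radians) needed to bring a reference point to
    the sensor center: arctg(k1*W/f) horizontally and arctg(k2*H/f)
    vertically, where k1 and k2 are the point's offsets from the picture
    center as fractions of the picture width and height."""
    pan = math.atan(k1 * sensor_w / focal_len)
    tilt = math.atan(k2 * sensor_h / focal_len)
    return pan, tilt

# Example: a hypothetical 6.4 mm x 4.8 mm sensor behind an 8 mm lens,
# with the reference point a quarter of the picture width off-center.
pan, tilt = rotation_angles(0.25, 0.1, 6.4, 4.8, 8.0)
```

A point already at the picture center (k1 = k2 = 0) yields zero pan and tilt, as expected from arctg(0) = 0.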
After the angles by which the camera module needs to rotate in the horizontal and vertical directions are calculated, a position of the moved reference point after the dome camera is rotated is then calculated.
The detailed algorithm of calculating the position of the moved reference point after the dome camera is rotated includes calculating coordinates of the dome camera when the reference point is rotated to the sensor center, and calculating, according to the coordinates of the dome camera when the reference point is rotated to the sensor center, coordinates of the reference point on the sensor after the rotation.
The following describes the foregoing process one by one in detail.
A reference point is still selected from a region or a tripwire. Suppose that the initial coordinates of the reference point before the rotation are (x1, y1) and that the corresponding PTZ coordinates of the dome camera are (p1, q1). The coordinates of the reference point need to be recalculated when the dome camera rotates to new coordinates (p2, q2). In the foregoing, p1 is a horizontal angle coordinate, q1 is a vertical angle coordinate, the focal length corresponding to the coordinates (p1, q1) is f1, and the focal length corresponding to (p2, q2) is f2.
(1) The coordinates of the dome camera when the reference point is rotated to the sensor center are calculated.
In this embodiment of the present disclosure, the reference point (x1, y1) may be rotated to the sensor center. For ease of description, suppose that the coordinates of the sensor center are (x0, y0) and that the horizontal and vertical angles of the dome camera when the reference point is rotated to the sensor center are (p0, q0). It can then be obtained from the foregoing detailed algorithm of moving the reference point to the sensor center that p1−p0=arctg((x1−x0)/the total quantity of horizontal pixels on the picture*W/f1), and q1−q0=arctg((y1−y0)/the total quantity of vertical pixels on the picture*H/f1), where (x1−x0)/the total quantity of horizontal pixels on the picture is k1 mentioned above.
Because the focal length f1 corresponding to the coordinates (p1, q1) may be determined according to a zoom ratio, the coordinates (p0, q0) of the dome camera may be calculated.
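Step (1) can be sketched as follows. This is a hypothetical rendering of the two formulas (with `math.atan` for arctg); the function name, parameter names, and numeric values are all assumptions for the example.

```python
import math

def dome_coords_at_center(point, sensor_center, dome_coords, f1,
                          px_w, px_h, sensor_w, sensor_h):
    """Solve p1 - p0 = arctg((x1 - x0)/px_w * W/f1) and
    q1 - q0 = arctg((y1 - y0)/px_h * H/f1) for the dome coordinates
    (p0, q0) at which the reference point lands on the sensor center.
    px_w and px_h are the picture's total horizontal and vertical pixel
    counts; W, H are the sensor's physical width and height."""
    x1, y1 = point
    x0, y0 = sensor_center
    p1, q1 = dome_coords
    p0 = p1 - math.atan((x1 - x0) / px_w * sensor_w / f1)
    q0 = q1 - math.atan((y1 - y0) / px_h * sensor_h / f1)
    return p0, q0
```

If the reference point already coincides with the sensor center, the arctg terms vanish and (p0, q0) equals (p1, q1), matching the formulas above.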
(2) The coordinates of the reference point on the sensor after the rotation are calculated according to the coordinates of the dome camera when the reference point is rotated to the sensor center.
When the dome camera rotates to (p2, q2), supposing that the specific imaging position of the reference point on the sensor plate is (x2, y2), it may be obtained based on the foregoing formula that (x2−x0)/the total quantity of horizontal pixels on the picture*W/f2=tg(p2−p0), and (y2−y0)/the total quantity of vertical pixels on the picture*H/f2=tg(q2−q0), where the focal length f2 corresponding to the coordinates (p2, q2) may be determined according to the zoom ratio.
(x2, y2), that is, the coordinate position of the reference point on the sensor after the rotation, may be calculated according to the foregoing formula.
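Step (2) inverts the same relation; again a hypothetical sketch with invented names and values, using `math.tan` for tg.

```python
import math

def point_after_rotation(p0q0, p2q2, sensor_center, f2,
                         px_w, px_h, sensor_w, sensor_h):
    """Given (p0, q0) from step (1) and the new dome coordinates (p2, q2),
    solve (x2 - x0)/px_w * W/f2 = tg(p2 - p0) and
    (y2 - y0)/px_h * H/f2 = tg(q2 - q0) for the new imaging
    position (x2, y2) of the reference point on the sensor."""
    p0, q0 = p0q0
    p2, q2 = p2q2
    x0, y0 = sensor_center
    x2 = x0 + math.tan(p2 - p0) * f2 / sensor_w * px_w
    y2 = y0 + math.tan(q2 - q0) * f2 / sensor_h * px_h
    return x2, y2
```

When (p2, q2) equals (p0, q0), both tg terms are zero and the reference point sits exactly on the sensor center, as the formula requires.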
The coordinates of the reference point after the operation may be directly determined using the foregoing algorithm when an image operation instruction is performing a rotation operation and a zoom operation.
When an image operation instruction is only performing a rotation operation, because zoom is not involved, f1=f2 in the foregoing formula in this case. Similarly, the coordinates of the reference point after the operation may be directly determined using the foregoing formula.
When an image operation instruction is only performing a zoom operation, the vertices of a region (or the end points of a line) only need to be scaled by the ratio of the new zoom multiple to the original one. For example, it is assumed that the rule is a square region, the calculated center (x2, y2) is (50, 50), the original multiple is 1, the length and the width are 10, the original coordinates of the four vertices are (45, 45), (45, 55), (55, 45), and (55, 55), and the current multiple is 2. Then, the current coordinates of the four vertices after calculation are (40, 40), (40, 60), (60, 40), and (60, 60). That is, both the length and the width are twice the original ones.
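The vertex calculation in the zoom-only case can be reproduced in a few lines; this is a sketch under the assumption that the scaling is done about the region's center point, with names invented for the example.

```python
def scale_vertices(vertices, center, old_zoom, new_zoom):
    """Scale a region's vertices about its center by the change in zoom
    multiple, so a region drawn at multiple 1 doubles in size at multiple 2."""
    cx, cy = center
    k = new_zoom / old_zoom
    return [(cx + (x - cx) * k, cy + (y - cy) * k) for x, y in vertices]

# The square region from the example above, centered at (50, 50).
square = [(45, 45), (45, 55), (55, 45), (55, 55)]
```

Scaling `square` about (50, 50) from multiple 1 to multiple 2 reproduces the vertices (40, 40), (40, 60), (60, 40), and (60, 60) given in the example.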
By means of the foregoing detailed algorithm, the coordinates of the reference point after the operation can be determined, and further, the coordinates of the rule are determined and the rule is displayed on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
FIG. 6A is a structural block diagram of a camera according to an embodiment of the present disclosure. Referring to FIG. 6A, the camera 60 includes a CPU 61 and an encoding processor 62, where the CPU 61 is configured to receive an image operation instruction, where the image operation instruction includes performing at least one of a rotation operation or a zoom operation. The CPU 61 is further configured to perform an operation on an image according to the image operation instruction, where the image is overlaid with a rule which is a tripwire or a geometric region and includes a reference object for the rule, and the encoding processor 62 is configured to display the rule on the image on which the CPU 61 has performed the operation such that a relative position of the rule relative to the reference object remains unchanged before and after the operation.
In an embodiment, the CPU 61 is further configured to calculate coordinates of the rule in a pre-established coordinate system after the operation, and the encoding processor 62 is further configured to display, according to the coordinates of the rule after the operation, the rule on the image on which the operation has been performed such that the relative position of the rule relative to the reference object remains unchanged before and after the operation.
In an embodiment, the CPU 61 may be further configured to acquire an operation parameter, where the operation parameter includes at least one of a rotation angle or a zoom ratio, and calculate, according to the operation parameter, the coordinates of the rule in the pre-established coordinate system after the operation.
Optionally, in another embodiment, referring to FIG. 6B, the camera 60 further includes a motor control board 63, where when the image operation instruction is performing a rotation operation, the motor control board 63 is configured to calculate a rotation angle, and notify the CPU 61 of the calculated rotation angle, and the CPU 61 is further configured to calculate, according to the rotation angle notified by the motor control board 63, the coordinates of the rule in the pre-established coordinate system after the operation.
Optionally, the CPU 61 may be further configured to determine coordinates of a preselected reference point in the rule before an operation, calculate, according to the calculated rotation angle and the coordinates of the reference point before the operation, coordinates of a dome camera corresponding to a current picture when the reference point is rotated to a picture center, calculate, according to those coordinates of the dome camera, coordinates of the reference point after the operation, and determine, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
Optionally, in another embodiment, the encoding processor 62 is further configured to calculate a zoom ratio, and the CPU 61 may be further configured to calculate, according to the zoom ratio calculated by the encoding processor 62, the coordinates of the rule after the operation.
Optionally, in another embodiment, the camera 60 further includes a motor control board 63, where when the image operation instruction is performing a rotation operation and a zoom operation, the motor control board 63 is configured to calculate a rotation angle. The encoding processor 62 is further configured to calculate a zoom ratio when the image operation instruction is performing a rotation operation and a zoom operation, and the CPU 61 is further configured to calculate, according to the rotation angle calculated by the motor control board 63 and the zoom ratio calculated by the encoding processor 62, the coordinates of the rule after the operation.
Optionally, the CPU 61 may be further configured to select a reference point in the rule and determine coordinates of the reference point before an operation, calculate, according to the rotation angle and the zoom ratio that are calculated and the coordinates of the reference point before the operation, coordinates of a dome camera corresponding to a current picture when the reference point is rotated to a picture center, calculate, according to those coordinates of the dome camera, coordinates of the reference point after the operation, and determine, according to the coordinates of the reference point after the operation, the coordinates of the rule after the operation.
In an embodiment, optionally, the CPU 61 is further configured to set an effective condition for the image on which the operation has been performed, and redisplay the first image when the effective condition is satisfied.
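The effective-condition behaviour can be sketched as follows. This Python fragment is an illustrative assumption (class and method names are hypothetical) in which the effective condition is an effective time, per claim 6: the operated image ("first image") is stored, subsequent operations may replace the displayed image, and once the effective time arrives the first image is redisplayed.

```python
class EffectiveView:
    """Sketch of the effective-condition behaviour, assuming the
    condition is an effective time (illustrative, not the patented
    implementation)."""

    def __init__(self):
        self.first_image = None    # image on which the operation was performed
        self.displayed = None      # image currently shown
        self.effective_at = None   # time at which the condition is satisfied

    def perform_operation(self, image, effective_seconds, now):
        self.first_image = image
        self.displayed = image
        self.effective_at = now + effective_seconds

    def subsequent_operation(self, image):
        # shown while the effective condition is not yet satisfied (claim 5)
        self.displayed = image

    def current_display(self, now):
        if self.effective_at is not None and now >= self.effective_at:
            return self.first_image    # redisplay the first image
        return self.displayed
```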
It should be noted that FIG. 6A and FIG. 6B show only the key components of the camera that are involved in the present disclosure. This presentation highlights the emphasis of the present disclosure and does not mean that the camera 60 is provided only with the components shown in the figures.
To better understand the camera provided in this embodiment of the present disclosure, the following describes the hardware logical architecture of the camera in detail.
Referring to FIG. 7, the camera 70 may include a CPU 71, an encoding processor 72, a motor control board 73, and a motor 74. The CPU 71 receives an image operation instruction and instructs, according to the image operation instruction, the encoding processor 72 and/or the motor control board 73 to perform the corresponding operations. For example, when the image operation instruction comprises performing a rotation and zoom operation, the CPU 71 sends a rotation instruction to the motor control board 73 and a zoom instruction to the encoding processor 72. After receiving the instruction from the CPU 71, the motor control board 73 controls the motor 74 to rotate. The motor 74 may include at least one of a left, right, upper, or lower control motor, and is configured to rotate the camera lens such that the lens image rotates. After receiving the instruction from the CPU 71, the encoding processor 72 controls the lens to perform zoom processing.
In this embodiment of the present disclosure, the motor control board 73 controls the motor 74 (or the CPU 71 directly implements this control). The related coordinates are recorded and calculated by the motor control board 73 (or the CPU 71) and fed back to the CPU 71 in real time. The encoding processor 72 may acquire the zoom ratio used to control the lens and may feed the zoom ratio back to the CPU 71. The CPU 71 is configured to calculate, according to the coordinates fed back by the motor control board 73 and the zoom ratio fed back by the encoding processor 72, the coordinates of the rule after the operation, and to instruct the encoding processor 72 to display the rule after the operation on the lens image.
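The dispatch flow of FIG. 7 can be summarized in a short sketch. The Python fragment below is a hypothetical illustration (function, parameter, and method names are assumptions, not the patented interfaces): the CPU routes the rotation to the motor control board and the zoom to the encoding processor, then recomputes the rule coordinates from the parameters they feed back and hands the new rule to the encoding processor for display.

```python
def handle_image_operation(instruction, motor_board, encoder, recompute_rule):
    """Sketch of the CPU 71 dispatch flow (illustrative names): rotation
    goes to the motor control board, zoom goes to the encoding processor,
    and the rule is recomputed from their feedback."""
    angle, ratio = 0.0, 1.0
    if "rotate" in instruction:
        angle = motor_board.rotate(instruction["rotate"])  # feeds angle back
    if "zoom" in instruction:
        ratio = encoder.zoom(instruction["zoom"])          # feeds ratio back
    new_rule = recompute_rule(angle, ratio)                # rule after operation
    encoder.overlay_rule(new_rule)                         # display on the image
    return new_rule
```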
Certainly, it should be pointed out that the camera in this embodiment of the present disclosure may also include other components, for example, a lens and a sensor. These components function in a conventional manner and are not described herein.
It should be noted that, the camera provided in the foregoing embodiment and the embodiment of the image processing method belong to a same conception. For a specific implementation process, refer to the method embodiment, and details are not described herein again.
A person of ordinary skill in the art may understand that all or some of the steps of the embodiments may be implemented by hardware or a program instructing related hardware. The program may be stored in a computer-readable storage medium. The storage medium may include a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely specific implementation manners of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

What is claimed is:
1. An image processing method, implemented by a camera, wherein the image processing method comprises:
receiving, by the camera, an image operation instruction, wherein the image operation instruction comprises performing at least one of a rotation operation or a zoom operation;
performing, by the camera, an operation on an image according to the image operation instruction, wherein the image is overlaid with an intelligent analysis rule, and wherein the intelligent analysis rule comprises either a tripwire rule and a reference object for the tripwire rule or a geometric rule and a reference object for the geometric rule;
calculating, by the camera, coordinates of the intelligent analysis rule in a pre-established coordinate system after the operation; and
displaying, by the camera based upon the coordinates of the intelligent analysis rule in the pre-established coordinate system after the operation, the intelligent analysis rule on the image on which the operation has been performed such that a relative position of the intelligent analysis rule relative to the reference object remains unchanged before and after the operation,
wherein the image on which the operation has been performed is a first image,
wherein the image processing method further comprises:
setting, by the camera, an effective condition for the first image; and
redisplaying, by the camera, the first image when the effective condition is satisfied,
wherein calculating the coordinates of the intelligent analysis rule after the operation comprises:
calculating, by the camera, an operation parameter that comprises at least one of a rotation angle and a zoom ratio; and
calculating, by the camera according to the operation parameter, the coordinates of the intelligent analysis rule after the operation,
wherein the image operation instruction comprises performing the rotation operation,
wherein calculating the operation parameter comprises calculating the rotation angle, and
wherein calculating the coordinates of the intelligent analysis rule after the operation comprises calculating, by the camera according to the rotation angle, the coordinates of the intelligent analysis rule after the operation.
2. The image processing method of claim 1, wherein calculating the coordinates of the intelligent analysis rule after the operation comprises:
selecting, by the camera, a reference point in the intelligent analysis rule;
determining, by the camera, coordinates of the reference point before the operation;
calculating, by the camera according to the rotation angle and the coordinates of the reference point before the operation, coordinates of a dome camera that correspond to a current picture when the reference point is rotated to a picture center;
calculating, by the camera according to the coordinates of the dome camera that correspond to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation; and
determining, by the camera according to the coordinates of the reference point after the operation, the coordinates of the intelligent analysis rule after the operation.
3. The image processing method of claim 1, wherein the image operation instruction comprises performing the rotation operation and the zoom operation, wherein calculating the operation parameter comprises calculating the rotation angle and the zoom ratio, and wherein calculating, according to the operation parameter, the coordinates of the intelligent analysis rule after the operation comprises calculating, by the camera according to the rotation angle and the zoom ratio that are calculated, the coordinates of the intelligent analysis rule after the operation.
4. The image processing method of claim 3, wherein calculating the coordinates of the intelligent analysis rule after the operation comprises:
selecting, by the camera, a reference point in the intelligent analysis rule;
determining, by the camera, coordinates of the reference point before the operation;
calculating, by the camera according to the rotation angle and the zoom ratio that are calculated and the coordinates of the reference point before the operation, coordinates of a dome camera that correspond to a current picture when the reference point is rotated to a picture center;
calculating, by the camera according to the coordinates of the dome camera that correspond to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation; and
determining, by the camera according to the coordinates of the reference point after the operation, the coordinates of the intelligent analysis rule after the operation.
5. The image processing method of claim 1, further comprising displaying, by the camera, an image that is obtained after a subsequent operation is performed on the image when the effective condition is not satisfied and the subsequent operation is performed on the image.
6. The image processing method of claim 1, wherein the effective condition comprises effective time.
7. A camera, comprising:
a central processing unit configured to:
receive an image operation instruction, wherein the image operation instruction comprises performing at least one of a rotation operation and a zoom operation;
perform an operation on an image according to the image operation instruction, wherein the image is overlaid with an intelligent analysis rule, wherein the intelligent analysis rule comprises either a tripwire rule and a reference object for the tripwire rule or a geometric region rule and a reference object for the geometric region rule; and
calculate coordinates of the intelligent analysis rule in a pre-established coordinate system after the operation; and
an encoding processor coupled to the central processing unit and configured to display, based upon the coordinates of the intelligent analysis rule in the pre-established coordinate system after the operation, the intelligent analysis rule on the image on which the central processing unit has performed the operation such that a relative position of the intelligent analysis rule relative to the reference object remains unchanged before and after the operation,
wherein the image on which the operation has been performed is a first image,
wherein the central processing unit is further configured to:
set an effective condition for the image on which the operation has been performed;
redisplay the first image when the effective condition is satisfied;
acquire an operation parameter that comprises at least one of a rotation angle and a zoom ratio; and
calculate, according to the operation parameter, the coordinates of the intelligent analysis rule in the pre-established coordinate system after the operation,
wherein the camera further comprises a motor control board coupled to the central processing unit and when the image operation instruction comprises performing the rotation operation, the motor control board is configured to:
calculate the rotation angle; and
notify the central processing unit of the rotation angle, and
wherein the central processing unit is further configured to calculate, according to the rotation angle notified by the motor control board, the coordinates of the intelligent analysis rule in the pre-established coordinate system after the operation.
8. The camera of claim 7, wherein the central processing unit is further configured to:
determine coordinates of a preselected reference point in the intelligent analysis rule before the operation;
calculate, according to the rotation angle and the coordinates of the preselected reference point before the operation, coordinates of a dome camera that correspond to a current picture when the preselected reference point is rotated to a picture center;
calculate, according to the coordinates of the dome camera that correspond to the current picture when the preselected reference point is rotated to the picture center, coordinates of the preselected reference point after the operation; and
determine, according to the coordinates of the preselected reference point after the operation, the coordinates of the intelligent analysis rule after the operation.
9. The camera of claim 7, wherein when the image operation instruction is performing the rotation operation and the zoom operation, the motor control board is configured to calculate a rotation angle, and the encoding processor is further configured to calculate the zoom ratio, and the central processing unit is further configured to calculate, according to the rotation angle calculated by the motor control board and the zoom ratio calculated by the encoding processor, the coordinates of the intelligent analysis rule after the operation.
10. The camera of claim 9, wherein the central processing unit is further configured to:
select a reference point in the intelligent analysis rule;
determine coordinates of the reference point before the operation;
calculate, according to the rotation angle and the zoom ratio that are calculated and the coordinates of the reference point before the operation, coordinates of a dome camera that correspond to a current picture when the reference point is rotated to a picture center;
calculate, according to the coordinates of the dome camera that correspond to the current picture when the reference point is rotated to the picture center, coordinates of the reference point after the operation; and
determine, according to the coordinates of the reference point after the operation, the coordinates of the intelligent analysis rule after the operation.
11. An image processing method, implemented by a camera, wherein the image processing method comprises:
receiving, by the camera, an image operation instruction, wherein the image operation instruction comprises performing at least one of a rotation operation or a zoom operation;
performing, by the camera, an operation on an image according to the image operation instruction, wherein the image is overlaid with an intelligent analysis rule, and wherein the intelligent analysis rule comprises either a tripwire rule and a reference object for the tripwire rule or a geometric rule and a reference object for the geometric rule;
calculating, by the camera, coordinates of the intelligent analysis rule in a pre-established coordinate system after the operation; and
displaying, by the camera based upon the coordinates of the intelligent analysis rule in the pre-established coordinate system after the operation, the intelligent analysis rule on the image on which the operation has been performed such that a relative position of the intelligent analysis rule relative to the reference object remains unchanged before and after the operation,
wherein the image on which the operation has been performed is a first image,
wherein the image processing method further comprises:
setting, by the camera, an effective condition for the first image; and
redisplaying, by the camera, the first image when the effective condition is satisfied,
wherein calculating the coordinates of the intelligent analysis rule after the operation comprises:
calculating, by the camera, an operation parameter that comprises at least one of a rotation angle and a zoom ratio; and
calculating, by the camera according to the operation parameter, the coordinates of the intelligent analysis rule after the operation,
wherein the image operation instruction comprises performing the zoom operation,
wherein calculating the operation parameter comprises calculating the zoom ratio, and
wherein calculating, according to the operation parameter, the coordinates of the intelligent analysis rule after the operation comprises calculating, by the camera according to the zoom ratio, the coordinates of the intelligent analysis rule after the operation.
12. A camera, comprising:
a central processing unit configured to:
receive an image operation instruction, wherein the image operation instruction comprises performing at least one of a rotation operation and a zoom operation;
perform an operation on an image according to the image operation instruction, wherein the image is overlaid with an intelligent analysis rule, wherein the intelligent analysis rule comprises either a tripwire rule and a reference object for the tripwire rule or a geometric region rule and a reference object for the geometric region rule; and
calculate coordinates of the intelligent analysis rule in a pre-established coordinate system after the operation; and
an encoding processor coupled to the central processing unit and configured to display, based upon the coordinates of the intelligent analysis rule in the pre-established coordinate system after the operation, the intelligent analysis rule on the image on which the central processing unit has performed the operation such that a relative position of the intelligent analysis rule relative to the reference object remains unchanged before and after the operation,
wherein the image on which the operation has been performed is a first image,
wherein the central processing unit is further configured to:
set an effective condition for the image on which the operation has been performed; and
redisplay the first image when the effective condition is satisfied, wherein the central processing unit is further configured to:
acquire an operation parameter that comprises at least one of a rotation angle and a zoom ratio; and
calculate, according to the operation parameter, the coordinates of the intelligent analysis rule in the pre-established coordinate system after the operation,
wherein the encoding processor is further configured to calculate the zoom ratio, and
wherein the central processing unit is further configured to calculate, according to the zoom ratio calculated by the encoding processor, the coordinates of the intelligent analysis rule after the operation.
US15/392,636 2014-06-30 2016-12-28 Image processing method and camera Active 2036-01-15 US10425608B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201410305120.2A CN105450918B (en) 2014-06-30 2014-06-30 Image processing method and camera
CN201410305120 2014-06-30
CN201410305120.2 2014-06-30
PCT/CN2015/082535 WO2016000572A1 (en) 2014-06-30 2015-06-26 Image processing method and video camera

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/082535 Continuation WO2016000572A1 (en) 2014-06-30 2015-06-26 Image processing method and video camera

Publications (2)

Publication Number Publication Date
US20170111604A1 US20170111604A1 (en) 2017-04-20
US10425608B2 true US10425608B2 (en) 2019-09-24

Family

ID=55018445

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/392,636 Active 2036-01-15 US10425608B2 (en) 2014-06-30 2016-12-28 Image processing method and camera

Country Status (6)

Country Link
US (1) US10425608B2 (en)
EP (1) EP3148180B1 (en)
JP (1) JP6608920B2 (en)
KR (1) KR101932670B1 (en)
CN (1) CN105450918B (en)
WO (1) WO2016000572A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110719403A (en) * 2019-09-27 2020-01-21 北京小米移动软件有限公司 Image processing method, device and storage medium
CN111246097B (en) * 2020-01-19 2021-06-04 成都依能科技股份有限公司 PTZ scanning path generation method based on graph perception
CN113313634B (en) * 2020-02-26 2023-06-09 杭州海康威视数字技术股份有限公司 Monitoring image processing method, device, monitoring system and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6696945B1 (en) * 2001-10-09 2004-02-24 Diamondback Vision, Inc. Video tripwire
US6970083B2 (en) * 2001-10-09 2005-11-29 Objectvideo, Inc. Video tripwire

Patent Citations (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5363297A (en) 1992-06-05 1994-11-08 Larson Noble G Automated camera-based tracking system for sports contests
US5729471A (en) 1995-03-31 1998-03-17 The Regents Of The University Of California Machine dynamic selection of one video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene
JPH0981758A (en) 1995-09-19 1997-03-28 Toshiba Corp Vehicle detecting device
US6727938B1 (en) 1997-04-14 2004-04-27 Robert Bosch Gmbh Security system with maskable motion detection and camera with an adjustable field of view
JP2004129131A (en) 2002-10-07 2004-04-22 Matsushita Electric Ind Co Ltd Monitor camera system
EP1465115A2 (en) 2003-03-14 2004-10-06 British Broadcasting Corporation Method and apparatus for generating a desired view of a scene from a selected viewpoint
US20060244826A1 (en) * 2004-06-22 2006-11-02 Stratech Systems Limited Method and system for surveillance of vessels
US20060139484A1 (en) 2004-12-03 2006-06-29 Seo Sung H Method for controlling privacy mask display
US20070115355A1 (en) 2005-11-18 2007-05-24 Mccormack Kenneth Methods and apparatus for operating a pan tilt zoom camera
CN101072301A (en) 2006-05-12 2007-11-14 富士胶片株式会社 Method for displaying face detection frame, method for displaying character information, and image-taking device
US20070266312A1 (en) 2006-05-12 2007-11-15 Fujifilm Corporation Method for displaying face detection frame, method for displaying character information, and image-taking device
JP2007306416A (en) 2006-05-12 2007-11-22 Fujifilm Corp Method for displaying face detection frame, method for displaying character information, and imaging apparatus
US20090315712A1 (en) * 2006-06-30 2009-12-24 Ultrawave Design Holding B.V. Surveillance method and system using object based rule checking
CN101266710A (en) 2007-03-14 2008-09-17 中国科学院自动化研究所 An all-weather intelligent video analysis monitoring method based on a rule
US20090244327A1 (en) 2008-03-26 2009-10-01 Masaaki Toguchi Camera system
JP2009239499A (en) 2008-03-26 2009-10-15 Elmo Co Ltd Camera system
US20100039548A1 (en) 2008-08-18 2010-02-18 Sony Corporation Image processing apparatus, image processing method, program and imaging apparatus
JP2010045733A (en) 2008-08-18 2010-02-25 Sony Corp Image processor, image processing method, program, and image pickup device
JP2008283726A (en) 2008-08-25 2008-11-20 Sony Corp Monitoring apparatus, monitoring system and filter setting method
US20100245576A1 (en) * 2009-03-31 2010-09-30 Aisin Seiki Kabushiki Kaisha Calibrating apparatus for on-board camera of vehicle
WO2011031128A1 (en) 2009-09-08 2011-03-17 Mimos Berhad Control mechanism for automated surveillance system
WO2011083547A1 (en) 2010-01-06 2011-07-14 Canon Kabushiki Kaisha Camera platform system
JP2011234172A (en) 2010-04-28 2011-11-17 Fujitsu Ltd Image processing device, image processing method, computer program for image processing and imaging device
US20120086780A1 (en) * 2010-10-12 2012-04-12 Vinay Sharma Utilizing Depth Information to Create 3D Tripwires in Video
CN102098499A (en) 2011-03-24 2011-06-15 杭州华三通信技术有限公司 Pan/ tilt/ zoom (PTZ) camera control method, device and system thereof
US20130178257A1 (en) * 2012-01-06 2013-07-11 Augaroo, Inc. System and method for interacting with virtual objects in augmented realities
CN103209290A (en) 2012-01-12 2013-07-17 鸿富锦精密工业(深圳)有限公司 Control system and method of PTZ (Pan Tilt Zoom) photographic device
JP2014011584A (en) 2012-06-28 2014-01-20 Canon Inc Information processor and control method for the same
US20170039430A1 (en) 2012-06-28 2017-02-09 Canon Kabushiki Kaisha Setting apparatus and setting method
CN103024276A (en) 2012-12-17 2013-04-03 沈阳聚德视频技术有限公司 Positioning and focusing method of pan-tilt camera
CN103544806A (en) 2013-10-31 2014-01-29 江苏物联网研究发展中心 Important cargo transportation vehicle monitoring and prewarning system based on video tripwire rule

Non-Patent Citations (21)

* Cited by examiner, † Cited by third party
Title
Foreign Communication From a Counterpart Application, Chinese Application No. 201410305120.2, Chinese Office Action dated Aug. 1, 2018, 9 pages.
Foreign Communication From a Counterpart Application, Chinese Application No. 201410305120.2, Chinese Office Action dated Jan. 2, 2018, 9 pages.
Foreign Communication From a Counterpart Application, European Application No. 15815512.7, Extended European Search Report dated Jun. 19, 2017, 8 pages.
Foreign Communication From a Counterpart Application, European Application No. 15815512.7, European Office Action dated Jun. 7, 2018, 8 pages.
Foreign Communication From a Counterpart Application, Japanese Application No. 2017-519769, English Translation of Japanese Office Action dated May 8, 2018, 5 pages.
Foreign Communication From a Counterpart Application, Japanese Application No. 2017-519769, Japanese Office Action dated May 8, 2018, 5 pages.
Foreign Communication From a Counterpart Application, Korean Application No. 10-2017-7001139, English Translation of Korean Office Action dated Mar. 9, 2018, 4 pages.
Foreign Communication From a Counterpart Application, Korean Application No. 10-2017-7001139, Korean Office Action dated Mar. 9, 2018, 6 pages.
Foreign Communication From a Counterpart Application, PCT Application No. PCT/CN2015/082535, English Translation of International Search Report dated Sep. 30, 2015, 2 pages.
Foreign Communication From a Counterpart Application, PCT Application No. PCT/CN2015/082535, English Translation of Written Opinion dated Sep. 30, 2015, 7 pages.
Haering, N., et al.,"The evolution of video surveillance: an overview," Machine Vision and Applications, vol. 19, No. 5-6, Springer, Apr. 30, 2008, 12 pages.
Machine Translation and Abstract of Chinese Publication No. CN101072301, Nov. 14, 2007, 27 pages.
Machine Translation and Abstract of Chinese Publication No. CN101266710, Sep. 17, 2008, 7 pages.
Machine Translation and Abstract of Chinese Publication No. CN102098499, Jun. 15, 2011, 23 pages.
Machine Translation and Abstract of Chinese Publication No. CN103024276, Apr. 3, 2013, 14 pages.
Machine Translation and Abstract of Chinese Publication No. CN103209290, Sep. 17, 2013, 9 pages.
Machine Translation and Abstract of Chinese Publication No. CN103544806, Jan. 29, 2014, 7 pages.
Machine Translation and Abstract of Japanese Publication No. JP2004129131, Apr. 22, 2004, 18 pages.
Machine Translation and Abstract of Japanese Publication No. JP2008283726, Nov. 20, 2008, 25 pages.
Machine Translation and Abstract of Japanese Publication No. JP2011234172, Nov. 17, 2011, 24 pages.
Machine Translation and Abstract of Japanese Publication No. JPH0981758, Mar. 28, 1997, 12 pages.

Also Published As

Publication number Publication date
JP6608920B2 (en) 2019-11-20
EP3148180A1 (en) 2017-03-29
JP2017520215A (en) 2017-07-20
WO2016000572A1 (en) 2016-01-07
KR101932670B1 (en) 2018-12-26
CN105450918A (en) 2016-03-30
EP3148180B1 (en) 2019-12-04
US20170111604A1 (en) 2017-04-20
KR20170020864A (en) 2017-02-24
EP3148180A4 (en) 2017-07-19
CN105450918B (en) 2019-12-24

Similar Documents

Publication Publication Date Title
RU2718413C2 (en) Information processing device, image forming device, control methods thereof and non-volatile computer-readable data storage medium
AU2017370476B2 (en) Virtual reality-based viewing method, device, and system
JP6786378B2 (en) Information processing equipment, information processing methods and programs
US8723951B2 (en) Interactive wide-angle video server
US20120056977A1 (en) Method for generating panoramic image
CN110602383B (en) Pose adjusting method and device for monitoring camera, terminal and storage medium
US10425608B2 (en) Image processing method and camera
JP2016127571A (en) Camera system, display control device, display control method, and program
CN110999307A (en) Display apparatus, server, and control method thereof
CN113286138A (en) Panoramic video display method and display equipment
WO2007060497A2 (en) Interactive wide-angle video server
CN114785961A (en) Patrol route generation method, device and medium based on holder camera
JP6543108B2 (en) INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM
US9438808B2 (en) Image capture control apparatus, method of limiting control range of image capture direction, and storage medium
US20170310891A1 (en) Image processing apparatus, image processing method and storage medium
US10257467B2 (en) Client device for displaying images of a controllable camera, method, computer program and monitoring system comprising said client device
US10770032B2 (en) Method and apparatus for processing image in virtual reality system
IL255870A (en) Method, device and installation for composing a video signal
EP4325425B1 (en) A method and system for defining an outline of a region in an image having distorted lines
KR102707798B1 (en) Camera device capable of pan-tilt-zoom operation and video surveillance system and method using the same
US12095964B2 (en) Information processing apparatus, information processing method, and storage medium
JP6949534B2 (en) Display device, image output device and display method
US10176615B2 (en) Image processing device, image processing method, and image processing program
JP2009147816A (en) Image signal processing apparatus, display device, image signal processing method, program and recording medium

Legal Events

Code Title Description
AS Assignment

Owner name: HUAWEI TECHNOLOGIES CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHOU, BO;CAI, YONGJIN;XU, XILEI;SIGNING DATES FROM 20170516 TO 20170517;REEL/FRAME:043001/0355

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4