CN110177258A - Image processing method, device, server and storage medium - Google Patents

Image processing method, device, server and storage medium

Info

Publication number
CN110177258A
CN110177258A (application CN201910580499.0A)
Authority
CN
China
Prior art keywords
shooting
mobile object
camera
shooting area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910580499.0A
Other languages
Chinese (zh)
Inventor
杜鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910580499.0A
Publication of CN110177258A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2624: Studio circuits for obtaining an image which is composed of whole input images, e.g. split screen
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/188: Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Abstract

This application discloses an image processing method, apparatus, server, and storage medium. The image processing method is applied to a server that is communicatively connected to multiple cameras; the cameras are distributed at different locations, and the shooting areas of any two adjacent cameras are adjacent or partially overlapping. The method includes: controlling the camera of a first shooting area among the multiple cameras to capture images of a moving object, and receiving the captured images from the camera of the first shooting area; when it is determined that the moving object is about to leave its current shooting area, obtaining a second shooting area that the moving object will enter; controlling the camera corresponding to the second shooting area to shoot, and receiving the captured images from the camera of the second shooting area; and splicing the received images together in the chronological order of their shooting times to obtain a video file corresponding to the moving object. The method can generate a motion video of the moving object as it moves through a continuous range.

Description

Image processing method, device, server and storage medium
Technical field
This application relates to the field of shooting technology, and more particularly to an image processing method, an image processing apparatus, a server, and a storage medium.
Background technique
At present, shooting technology is widely used in daily life, and people's demand for video capture keeps growing. For example, more and more places are deploying monitoring systems that use cameras to monitor the state of a region, the activities of people in it, and so on. However, because a camera's shooting area, that is, its viewing angle, is limited, a camera can usually only shoot a fixed region, so the content it can capture is limited.
Summary of the invention
In view of the above problems, the present application proposes an image processing method, apparatus, server, and storage medium that can generate a motion video of a moving object as it moves through a continuous range.
In a first aspect, an embodiment of the present application provides an image processing method applied to a server. The server is communicatively connected to multiple cameras, the cameras are distributed at different locations, and the shooting areas of any two adjacent cameras are adjacent or partially overlapping. The method includes: controlling the camera of a first shooting area among the multiple cameras to capture images of a moving object, and receiving the captured images from the camera of the first shooting area; taking the shooting area in which the moving object is currently located as the current shooting area and, when it is determined that the moving object is about to leave the current shooting area, obtaining a second shooting area that the moving object will enter, the second shooting area being adjacent to or partially overlapping the current shooting area; controlling the camera corresponding to the second shooting area to shoot the moving object, and receiving the captured images from the camera of the second shooting area; and splicing the received images together in the chronological order of their shooting times to obtain a video file corresponding to the moving object.
In a second aspect, an embodiment of the present application provides an image processing apparatus applied to a server. The server is communicatively connected to multiple cameras, the cameras are distributed at different locations, and the shooting areas of any two adjacent cameras are adjacent or partially overlapping. The apparatus includes a first control module, a region acquisition module, a second control module, and a video synthesis module. The first control module is configured to control the camera of a first shooting area among the multiple cameras to capture images of a moving object and to receive the captured images from the camera of the first shooting area. The region acquisition module is configured to take the shooting area in which the moving object is currently located as the current shooting area and, when it is determined that the moving object is about to leave the current shooting area, to obtain a second shooting area that the moving object will enter, the second shooting area being adjacent to or partially overlapping the current shooting area. The second control module is configured to control the camera corresponding to the second shooting area to shoot the moving object and to receive the captured images from the camera of the second shooting area. The video synthesis module is configured to splice the received images together in the chronological order of their shooting times to obtain a video file corresponding to the moving object.
In a third aspect, an embodiment of the present application provides a server including one or more processors, a memory, and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the image processing method provided in the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be invoked by a processor to execute the image processing method provided in the first aspect above.
The scheme provided by the present application is applied to a server that is communicatively connected to multiple cameras distributed at different locations, where the shooting areas of any two adjacent cameras are adjacent or partially overlapping. The camera of a first shooting area among the multiple cameras is controlled to capture images of a moving object, and the captured images are received from that camera. Taking the shooting area in which the moving object is currently located as the current shooting area, when it is determined that the moving object is about to leave the current shooting area, a second shooting area that the moving object will enter is obtained, the second shooting area being adjacent to or partially overlapping the current shooting area. The camera corresponding to the second shooting area is then controlled to shoot the moving object, and its captured images are received. The received images are spliced together in the chronological order of their shooting times to obtain a video file corresponding to the moving object. In this way, the images captured of the moving object in multiple shooting areas are spliced into a complete motion video of the moving object across those areas, so the user does not have to check the captured images of each shooting area one by one, improving the user experience.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of a distributed system provided by an embodiment of the present application.
Fig. 2 shows a flow chart of an image processing method according to one embodiment of the present application.
Fig. 3 shows a flow chart of an image processing method according to another embodiment of the present application.
Fig. 4 shows a flow chart of an image processing method according to yet another embodiment of the present application.
Fig. 5 shows a flow chart of an image processing method according to a further embodiment of the present application.
Fig. 6 shows a block diagram of an image processing apparatus according to one embodiment of the present application.
Fig. 7 is a block diagram of a server for executing an image processing method according to an embodiment of the present application.
Fig. 8 is a storage unit, according to an embodiment of the present application, for saving or carrying program code that implements an image processing method according to an embodiment of the present application.
Specific embodiment
In order to enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
With the development of society and the progress of science and technology, more and more places are deploying monitoring systems to monitor certain regions. In most monitoring scenarios, each camera can only monitor a fixed region, so only a video of a single monitored region can be formed, and a user who needs to view the monitoring of multiple regions can only check the monitoring video of each region separately. To solve this problem, panoramic video monitoring systems have emerged: a panoramic monitoring system captures images with cameras placed at different locations and synthesizes those images into a panoramic image. However, the range covered by the panoramic image is too large to monitor a particular object, so a user who needs to view the monitoring video of a particular object must sift through a panoramic monitoring video containing much more content, resulting in a poor user experience.
In view of the above problems, after long-term research the inventor proposes the image processing method, apparatus, server, and storage medium provided by the embodiments of the present application: shooting is monitored by multiple cameras distributed at different locations, and the images captured by the multiple cameras are then spliced together to form a video file of the moving object's activity across multiple shooting areas, improving the monitoring effect and making it convenient for users to view. The specific image processing method is described in detail in the following embodiments.
The distributed system to which the image processing method provided by the embodiments of the present application is applicable is described below.
Referring to Fig. 1, Fig. 1 shows a schematic diagram of a distributed system provided by an embodiment of the present application. The distributed system includes a server 100 and multiple cameras 200 (four cameras 200 are shown in Fig. 1). The server 100 is connected to each of the cameras 200 and exchanges data with each of them; for example, the server 100 receives images sent by a camera 200, or sends instructions to a camera 200, without specific limitation here. The server 100 may be a cloud server or a traditional server, and a camera 200 may be a bullet camera, a dome camera, a high-definition intelligent spherical camera, a pen-holder camera, a board camera, a flying-saucer camera, a mobile-phone-style camera, or the like; the lens of the camera may be a wide-angle lens, a standard lens, a telephoto lens, a zoom lens, a pinhole lens, or the like, without specific limitation here.
In some embodiments, the multiple cameras 200 are arranged at different positions to shoot different regions, and the shooting areas of every two adjacent cameras 200 are adjacent or partially overlapping. It can be understood that each camera 200 can shoot a different region depending on its field of view and its position; by arranging the shooting areas of every two adjacent cameras 200 to be adjacent or partially overlapping, the region shot by the distributed system is fully covered. The multiple cameras 200 may be arranged side by side at intervals along a length direction to shoot images of that lengthwise region, or at intervals along a circumferential direction to shoot images of that annular region; of course, the multiple cameras 200 may also be arranged in other ways, which is not a limitation here.
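The gap-free coverage property described above can be sketched in a few lines. The following is a minimal illustration, not from the patent: each camera's shooting area is modeled as a 1-D interval along a corridor, and the check confirms that every pair of neighboring areas is adjacent or overlapping. All names and the interval data are assumptions for the example.

```python
# Sketch (not from the patent): model each camera's shooting area as a
# 1-D interval along a length direction and check that adjacent areas
# touch or overlap, so the system's combined coverage has no gaps.

def coverage_is_continuous(areas):
    """areas: list of (start, end) intervals sorted by start."""
    for (s1, e1), (s2, e2) in zip(areas, areas[1:]):
        if s2 > e1:          # a gap between adjacent shooting areas
            return False
    return True

# Four cameras spaced along a length direction; neighbors overlap slightly.
camera_areas = [(0, 12), (10, 22), (20, 32), (30, 42)]
print(coverage_is_continuous(camera_areas))            # True
print(coverage_is_continuous([(0, 10), (12, 22)]))     # False: gap at 10-12
```

The same idea extends to a circumferential arrangement by also comparing the last interval with the first.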
Referring to Fig. 2, Fig. 2 shows a flow chart of an image processing method provided by one embodiment of the present application. The image processing method monitors shooting with multiple cameras distributed at different locations and then splices the images captured by the multiple cameras together to form a video file of the moving object's activity across multiple shooting areas, improving the monitoring effect and making it convenient for users to view. In a specific embodiment, the image processing method is applied to the image processing apparatus 400 shown in Fig. 6 and to the server 100 (Fig. 7) configured with the image processing apparatus 400. The detailed flow of this embodiment is described below taking a server as an example; it should of course be understood that the server to which this embodiment applies may be a cloud server or a traditional server, without limitation here. The server is communicatively connected to multiple cameras, the cameras are distributed at different locations, and the shooting areas of any two adjacent cameras are adjacent or partially overlapping. The flow shown in Fig. 2 is explained in detail below; the image processing method may specifically include the following steps:
Step S110: control the camera of a first shooting area among the multiple cameras to capture images of a moving object, and receive the captured images from the camera of the first shooting area.
In the embodiment of the present application, the multiple cameras correspond one-to-one to multiple shooting areas. When a moving object is present in the first shooting area among the multiple shooting areas, the camera corresponding to the first shooting area can be controlled to capture images of the moving object. The moving object may be a movable object such as a person, an animal, or a vehicle.
In some embodiments, a detection device for detecting moving objects may be arranged in each shooting area, and each such detection device is connected to the server. The detection device may be a human-body detection device, or a detection device based on a motion sensor or an infrared sensor. When the detection device of the first shooting area among the multiple shooting areas detects a moving object, the camera corresponding to the first shooting area can be controlled to capture images of the moving object. The server can send an open instruction to the camera to control the camera corresponding to the first shooting area to turn on and shoot the first shooting area, so as to capture images that include the moving object. Of course, the specific detection device for detecting moving objects is not a limitation; for example, it may be a detection device based on an infrared camera.
In some embodiments, the first shooting area may serve as a preset shooting area, and the camera corresponding to the first shooting area may be kept on at all times, uploading its captured images to the server. The server can determine from the received images whether a moving object is present in the first shooting area and, when one is present, continue to receive the camera's images of the moving object. As one approach, the content of a captured image can be identified according to the characteristic information of moving objects in order to determine whether a moving object is present in the image, without limitation here.
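The patent does not fix how the server decides a moving object is present, so the following is only one plausible sketch: frame differencing between consecutive grayscale frames, flagging motion when enough pixels change significantly. The function name, thresholds, and toy frames are all assumptions for illustration.

```python
# Sketch (an assumption, not the patent's algorithm): decide whether a
# shooting area contains a moving object by counting pixels that changed
# significantly between two consecutive grayscale frames.

def has_moving_object(prev_frame, curr_frame, pixel_thresh=25, count_thresh=4):
    """Frames are equal-sized 2-D lists of 0-255 grayscale values."""
    changed = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            if abs(p - c) > pixel_thresh:
                changed += 1
    return changed >= count_thresh

static = [[10] * 4 for _ in range(4)]
moved = [row[:] for row in static]
for r in range(2):                      # a bright 2x2 "object" appears
    for c in range(2):
        moved[r][c] = 200

print(has_moving_object(static, static))  # False
print(has_moving_object(static, moved))   # True
```

A production system would more likely use a detector matched to the object's characteristic information (e.g. a person detector), as the text suggests; frame differencing is simply the cheapest stand-in.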
Step S120: take the shooting area in which the moving object is currently located as the current shooting area; when it is determined that the moving object is about to leave the current shooting area, obtain a second shooting area that the moving object will enter, the second shooting area being adjacent to or partially overlapping the current shooting area.
In the embodiment of the present application, the server can continuously track the movement of the moving object according to the received images and, according to the change in the moving object's movement, control other cameras among the multiple cameras to shoot the moving object.
In some embodiments, the server can take the shooting area in which the moving object is currently located as the current shooting area; for example, when the moving object is currently in the first shooting area, the current shooting area is the first shooting area.
The server can determine the change in the moving object's movement and determine whether the moving object is about to leave the current shooting area. For example, the server can analyze the images captured by the camera of the current shooting area to obtain the movement change of the moving object, and determine from that movement change whether the moving object is about to leave the current shooting area.
In some embodiments, the server may also determine, from the images captured by the camera corresponding to the first shooting area, whether the moving object in the image is at the edge of the first shooting area; if the moving object is at the edge of the first shooting area, it is determined that the moving object is about to leave the first shooting area.
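The edge test above can be expressed as a bounding-box check. The helper below is hypothetical (the patent specifies no coordinates or margins): an object is treated as "at the edge" when its bounding box comes within a margin of the frame border.

```python
# Sketch (hypothetical helper): treat a tracked object as being at the
# edge of the current shooting area when its bounding box comes within
# `margin` pixels of the frame border, the cue for an imminent exit.

def at_frame_edge(bbox, frame_w, frame_h, margin=20):
    """bbox = (x, y, w, h) in pixel coordinates, origin at top-left."""
    x, y, w, h = bbox
    return (x <= margin or y <= margin
            or x + w >= frame_w - margin
            or y + h >= frame_h - margin)

print(at_frame_edge((300, 200, 60, 120), 640, 480))  # False: well inside
print(at_frame_edge((590, 200, 40, 120), 640, 480))  # True: near right edge
```

Which border the box touches can also indicate the direction of travel, and hence which adjacent area the object is likely to enter next.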
In some embodiments, when determining that the moving object is about to leave the current shooting area, the server can determine, from the change in the moving object's movement, the second shooting area that the moving object will enter. Since the shooting areas of any two adjacent cameras are adjacent or partially overlapping, the second shooting area so determined is adjacent to or partially overlapping the current shooting area.
In the embodiment of the present application, only the camera corresponding to the current shooting area among the multiple cameras may be on, while the other cameras are off; the server can subsequently, according to the second shooting area that the moving object will enter, control the camera corresponding to the second shooting area to turn on and shoot the moving object.
Step S130: control the camera corresponding to the second shooting area to shoot the moving object, and receive the captured images from the camera of the second shooting area.
After determining the second shooting area that the moving object will enter, the server can control the camera corresponding to the second shooting area to turn on and capture images of the moving object. The camera corresponding to the second shooting area can send its images of the moving object to the server, so that the server receives the images of the moving object in the second shooting area. Since the camera of the second shooting area was off before shooting the moving object, the power consumption of the cameras can be reduced while the movement of the moving object is being monitored.
In some embodiments, the camera corresponding to the shooting area in which the moving object was located before moving to the second shooting area can also be turned off, to save camera power.
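The power-saving policy amounts to a small state transition on hand-off. The sketch below is a hypothetical controller state (the patent describes the behavior, not a data structure): the entering area's camera is opened and the departed area's camera is closed.

```python
# Sketch (hypothetical controller state): only the camera of the area
# the object currently occupies is kept on; on a hand-off the next
# camera is opened and the previous one closed, saving camera power.

def hand_off(states, prev_area, next_area):
    """states: dict area -> 'on'/'off'. Returns the updated dict."""
    states = dict(states)
    states[next_area] = "on"    # open the camera the object is entering
    states[prev_area] = "off"   # close the camera it just left
    return states

states = {"A1": "on", "A2": "off", "A3": "off"}
states = hand_off(states, "A1", "A2")
print(states)  # {'A1': 'off', 'A2': 'on', 'A3': 'off'}
```

When adjacent areas overlap, an implementation might briefly keep both cameras on during the overlap so no frames of the object are lost.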
In the embodiment of the present application, the server can repeat step S120 and step S130, so that the server obtains the captured images from every shooting area that the moving object moves through.
Step S140: splice the received images together in the chronological order of their shooting times to obtain a video file corresponding to the moving object.
In the embodiment of the present application, the server can splice all the received images together to obtain the video file corresponding to the moving object. Here, all the received images are the images transmitted by all of the cameras that shot the moving object, as described above.
In some embodiments, the server can splice all the received images together in the chronological order of their shooting times to obtain the video file corresponding to the moving object.
In some embodiments, the server can obtain the shooting time of an image from the file information of the stored image. When uploading an image, the camera can send the shooting time to the server as one of the image's pieces of description information, so that when the server receives the image it also obtains its shooting time. Of course, the way the server obtains the shooting time is not a limitation; for example, the server may query the camera for the shooting time of the image.
In some embodiments, the server sorts the shooting times of all the images from earliest to latest to obtain their chronological order, and then splices all the images together in that order to obtain the video file of the moving object. That is, all the captured images constitute the frames of the video file, and the order of the frames in the video file is the same as the chronological order of the shooting times.
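The sort-then-concatenate step can be sketched directly. The record layout below is an assumption (the patent only says each image carries a shooting time as description information): each received frame is tagged with its shooting time and source camera, and the output frame order is the chronological order of those times.

```python
# Sketch (hypothetical data layout): the server tags every received frame
# with its shooting time and source camera, sorts by shooting time, and
# concatenates the frames in that order to form the output video.

received = [
    {"time": 3.0, "camera": "cam2", "frame": "f3"},
    {"time": 1.0, "camera": "cam1", "frame": "f1"},
    {"time": 4.0, "camera": "cam2", "frame": "f4"},
    {"time": 2.0, "camera": "cam1", "frame": "f2"},
]

def splice(frames):
    """Return frames ordered by shooting time - the video's frame order."""
    return [f["frame"] for f in sorted(frames, key=lambda f: f["time"])]

print(splice(received))  # ['f1', 'f2', 'f3', 'f4']
```

Note that frames arriving out of order over the network do not matter: the sort by shooting time, not arrival time, determines the frame order of the video file.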
In some embodiments, the server can also send the video file to an electronic device or a third-party platform (such as an app or a web mailbox), so that the user can view and download it and directly watch the video composed of the images captured of the moving object.
In addition, since it takes time for the moving object to move from one shooting area to another, different cameras capture the moving object one after another, so the shooting times of the images have a chronological order; the video spliced together in that order can therefore reflect the moving object's trajectory through the shooting areas formed by the multiple cameras. And since the shooting areas of any two adjacent cameras are adjacent or partially overlapping, the shooting areas of the multiple cameras form one complete region, so the spliced video file can reflect the moving object's activity within a relatively large continuous region.
In the image processing method provided by the embodiments of the present application, the change in a moving object's movement is monitored, the camera of each shooting area the moving object enters is controlled to shoot it, and the received images from those cameras are then spliced together to form a video file of the moving object's activity across multiple shooting areas, improving the monitoring effect, making it convenient for the user to view, and improving the user experience.
Referring to Fig. 3, Fig. 3 shows a flow chart of an image processing method provided by another embodiment of the present application. The method is applied to the server described above; the server is communicatively connected to multiple cameras, the cameras are distributed at different locations, and the shooting areas of any two adjacent cameras are adjacent or partially overlapping. The flow shown in Fig. 3 is explained in detail below; the image processing method may specifically include the following steps:
Step S210: detect whether a moving object is present in the first shooting area.
In the embodiment of the present application, the first shooting area can serve as a preset shooting area; when a moving object is present in that shooting area, movement monitoring of the moving object is triggered, and the camera of each shooting area the moving object enters is controlled to capture images.
In some embodiments, the moving object may be a person, and the first shooting area may be provided with a human-body detection device communicatively connected to the server. The human-body detection device may be any device capable of detecting a human body, such as a human-body infrared sensing device, without limitation here. In this embodiment, the server detecting whether a moving object is present in the first shooting area may include: receiving the detection data of the human-body detection device, and determining from the detection data whether a moving object is present in the first shooting area. It can be understood that when a person is present in the first shooting area, the human-body detection device can produce detection data indicating that a human body is present, so the server can determine from the detection data sent by the human-body detection device whether a person is present in the first shooting area.
Of course, the specific way the server detects whether a moving object is present in the first shooting area is not a limitation. For example, the server may also obtain images of the first shooting area in real time and determine, according to the characteristic information of the moving object, whether the images contain content matching that characteristic information; if the images of the first shooting area contain content matching the characteristic information, it is determined that the moving object is present in the first shooting area, and if not, it is determined that the moving object is not present.
Step S220: if a moving object is present in the first shooting area, control the camera of the first shooting area among the multiple cameras to capture images of the moving object, and receive the captured images from the camera of the first shooting area.
Step S230: take the shooting area in which the moving object is currently located as the current shooting area; when it is determined that the moving object is about to leave the current shooting area, obtain a second shooting area that the moving object will enter, the second shooting area being adjacent to or partially overlapping the current shooting area.
Step S240: control the camera corresponding to the second shooting area to shoot the moving object, and receive the captured images from the camera of the second shooting area.
In the embodiment of the present application, step S220 to step S240 can refer to the content of the previous embodiment and are not repeated here.
Step S250: repeat the steps of taking the shooting area in which the moving object is currently located as the current shooting area, obtaining, when it is determined that the moving object is about to leave the current shooting area, the second shooting area that the moving object will enter, controlling the camera corresponding to the second shooting area to shoot the moving object, and receiving the captured images from the camera of the second shooting area, until the moving object is no longer present in the shooting area of any of the multiple cameras.
In the embodiment of the present application, the server can repeat step S220 to step S240 continuously and end the repetition when the moving object is no longer present in the shooting areas of the multiple cameras, so that the server obtains the captured images from every shooting area the moving object moves through.
In some embodiments, the first shooting area described above may be the first shooting area that the moving object enters. The camera corresponding to the first shooting area is controlled to shoot the moving object; then, when the moving object is about to leave the first shooting area, the second shooting area that the moving object will enter is obtained, and the camera corresponding to the second shooting area is controlled to shoot the moving object; then, when the moving object is about to leave the second shooting area, the third shooting area that the moving object will enter is obtained, and the camera corresponding to the third shooting area is controlled to shoot the moving object; and so on, until the moving object has left all of the shooting areas corresponding to the multiple cameras, that is, until the moving object is no longer present in any shooting area.
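The repeat-until structure of steps S220 to S250 can be simulated in a few lines. This is a sketch under stated assumptions: `next_area` stands in for the server's motion-based prediction of the area the object will enter, and `None` stands for the object leaving every shooting area.

```python
# Sketch (hypothetical simulation) of the repeat-until loop: shoot in the
# current area, hand off to the area the object enters next, and stop
# once the object is no longer present in any shooting area.

def track(first_area, next_area):
    """Return the ordered list of areas whose cameras shot the object."""
    visited = []
    area = first_area
    while area is not None:      # None: object left every shooting area
        visited.append(area)     # camera of `area` shoots; images received
        area = next_area(area)   # area the object will enter next, if any
    return visited

# Object walks through a row of four areas, then leaves the system.
hand_offs = {"A1": "A2", "A2": "A3", "A3": "A4", "A4": None}
print(track("A1", hand_offs.get))  # ['A1', 'A2', 'A3', 'A4']
```

The list returned here corresponds to the set of shooting areas whose captured images the server later splices into the video file.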
Step S260: splice and synthesize the received captured images in the chronological order of their shooting times, obtaining a video file corresponding to the mobile object.
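Step S260 amounts to ordering the frames by shooting time and concatenating them. A minimal sketch, assuming each frame carries a timestamp and leaving real video encoding (which the patent does not specify) out of scope:

```python
def synthesize_video(frames):
    """Splice frames into a 'video' by sorting on shooting time.

    Each frame is a (timestamp, image) pair; the returned list of
    images in chronological order stands in for an encoded video file.
    """
    return [image for _, image in sorted(frames, key=lambda f: f[0])]
```

A real implementation would hand the ordered frames to a video encoder, but the chronological ordering is the essential step described here.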
In this embodiment of the present application, the way the server splices and synthesizes all received captured images may refer to the content of the foregoing embodiments and is not repeated here.
In some embodiments, after receiving the captured images uploaded by all cameras that shot the mobile object, and before splicing and synthesizing all the received captured images, the server may also screen all the received captured images.
As one implementation, the server may determine whether the received captured images include images of the mobile object taken by two adjacent cameras at the same time. If they do, the server obtains the first target image taken at that time by the first camera of the two adjacent cameras and the second target image taken at the same time by the second camera, and screens out from the first target image and the second target image the image in which the preset image quality parameter of the mobile object is optimal.
In this implementation, the regions shot by two adjacent cameras partially overlap. Therefore, when the mobile object is located in the overlapping portion of the regions shot by the two adjacent cameras, both cameras may shoot the mobile object at the same moment, so the captured images obtained by the server will include two images containing the mobile object taken at the same time. The server may therefore obtain the image taken at that moment by the first camera (one of the two adjacent cameras), denoted the first target image, and the image taken at the same moment by the second camera (the other of the two adjacent cameras), denoted the second target image.
Further, after obtaining the first target image and the second target image, the server may obtain the image quality parameter of the mobile object in the first target image (denoted the first image quality parameter) and the image quality parameter of the mobile object in the second target image (denoted the second image quality parameter). The image quality parameter may include, without limitation, sharpness, brightness, acutance, lens distortion, color, resolution, gamut range, purity, and so on.
After obtaining the first image quality parameter and the second image quality parameter, the server may compare the image quality effects corresponding to the two parameters, select the better of the two according to the comparison result, and screen out the image corresponding to the better parameter from the first target image and the second target image. For example, when the image quality parameter includes sharpness, the image with the highest sharpness may be obtained from the first target image and the second target image.
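The screening described above can be sketched as keeping, for each shooting time, the frame with the best quality score. The quality value attached to each frame (e.g. a sharpness measure) is assumed to be computed elsewhere; the patent does not fix a particular metric.

```python
def screen_duplicates(frames):
    """For frames shot at the same time by adjacent cameras, keep only
    the one with the highest quality score.

    Each frame is (timestamp, camera_id, image, quality). Returns the
    surviving frames in chronological order.
    """
    best = {}
    for frame in frames:
        timestamp, quality = frame[0], frame[3]
        if timestamp not in best or quality > best[timestamp][3]:
            best[timestamp] = frame
    return [best[t] for t in sorted(best)]
```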
After screening the received captured images as above, the server may splice and synthesize all the captured images remaining after screening, obtaining the video file corresponding to the mobile object.
In some embodiments, the image processing method may be applied to scenes in which a special area (such as a confidential area) is monitored. When the server determines that a mobile object is present in the special area, it may generate a video of the mobile object's action trail after appearing in the special area, making it convenient for users to review all behaviors of a person who has intruded into the special area.
In some embodiments, the image processing method may also be applied to scenes in which a mobile object is monitored. The video file obtained for the mobile object may serve as the surveillance video corresponding to the mobile object, and the server may send this surveillance video to the monitoring terminal corresponding to the mobile object, so that the user of the monitoring terminal learns of the mobile object's activity in time.
Further, when the scene of monitoring a mobile object is a household monitoring scene, the first shooting area may be the doorway region, so that a person's arrival home can be detected and the person's behavior after arriving home can be monitored. When the mobile object is an elderly person or a child, the elderly person or child can be monitored after arriving home, and the monitoring terminal may correspond to the guardian of the elderly person or child, so that the guardian learns in time of the situation of the elderly person or child at home after arriving, avoiding the occurrence of accidents.
Further, after obtaining the surveillance video of the mobile object, the server may also automatically analyze the surveillance video to determine whether the mobile object in it is in an abnormal condition. When the analysis result indicates that the mobile object is in an abnormal condition, the server may send alarm information to the monitoring terminal corresponding to the mobile object, so that the user of the monitoring terminal responds in time. Abnormal conditions may include, without limitation, falling down, lying motionless for a long time, crying, or falling ill. In addition, when sending the alarm information to the monitoring terminal, the server may also send the surveillance video, or the video clip corresponding to the abnormal condition, to the monitoring terminal, so that the user of the monitoring terminal learns of the mobile object's actual situation in time.
In some embodiments, after receiving the surveillance video sent by the server, the monitoring terminal may send target instruction information to the server based on the surveillance video; correspondingly, after receiving the target instruction information, the server responds to it and performs the corresponding operation, for example dialing an emergency police number or an ambulance number.
With the image processing method provided by the embodiments of the present application, when it is determined that a mobile object is present in the first shooting area, the camera corresponding to the first shooting area is controlled to shoot; then, according to the shooting areas the mobile object subsequently moves into, the cameras corresponding to those shooting areas are controlled in turn to shoot the mobile object; finally, the captured images from all cameras that shot the mobile object are spliced and synthesized into a video file of the mobile object's activity across the multiple shooting areas. This improves the monitoring effect, makes viewing convenient for the user, and improves the user experience.
Referring to Fig. 4, Fig. 4 is a flow diagram of an image processing method provided by another embodiment of the present application. The method is applied to the above-described server, the server is communicatively connected to multiple cameras, the multiple cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the multiple cameras are adjacent or partially overlap. The flow shown in Fig. 4 is explained in detail below; the image processing method may specifically include the following steps:
Step S310: control the camera of the first shooting area among the multiple cameras to capture images of a mobile object, and receive the captured images from the camera of the first shooting area.
In this embodiment of the present application, step S310 may refer to the content of the foregoing embodiments and is not repeated here.
Step S320: taking the shooting area in which the mobile object is currently located as the current shooting region, determine the movement data of the mobile object according to the captured images from the camera of the current shooting region, the movement data including at least a moving direction and a moving speed.
In some embodiments, when determining whether the mobile object is about to leave the current shooting region, the server may determine the movement data of the mobile object according to the images of the mobile object captured by the camera of the current shooting region, and then determine from the movement data whether the mobile object is about to leave the current shooting region.
As one implementation, the server may identify the position of the mobile object in each of multiple captured images from the camera of the current shooting region, and then calculate the moving speed and moving direction of the mobile object from the position of the mobile object in each captured image and the shooting time of each captured image. The calculation of moving speed and moving direction from the positions and shooting times may follow ordinary kinematics and is not limited here.
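The kinematic calculation referred to here can be illustrated with a two-sample estimate: speed is displacement over elapsed time, and direction is the angle of the displacement vector. The `(time, x, y)` sample format is an assumption for illustration; the patent leaves the exact representation open.

```python
import math

def movement_data(samples):
    """Estimate moving speed and direction from (time, x, y) position
    samples, using the first and last observation.

    Direction is returned in radians, counter-clockwise from the +x
    axis of the image plane.
    """
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    dt = t1 - t0
    speed = math.hypot(x1 - x0, y1 - y0) / dt
    direction = math.atan2(y1 - y0, x1 - x0)
    return speed, direction
```

Positions here would come from locating the mobile object in each captured image, as described in the surrounding text.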
Further, when identifying the position of the mobile object in each of the multiple captured images, the server may use preset feature information of the mobile object to determine the image content in each captured image that matches the feature information, then determine that image content's position within the captured image, thereby determining the position of the mobile object in the captured image. For example, when the mobile object is a person, the person's feature information (such as facial features) may be identified from the images captured by the camera corresponding to the first shooting area when the person first appears in the first shooting area.
Step S330: based on the movement data, determine whether the mobile object is about to leave the current shooting region.
In some embodiments, the server may determine, according to the moving speed and moving direction of the mobile object, the positions the mobile object will move to at different times, and thereby determine whether the mobile object is about to leave the current shooting region. For example, the server may calculate, from the moving speed and moving direction, the moment at which the mobile object will reach the edge of the current shooting region, and determine at that moment that the mobile object is about to leave the current shooting region; this is not limited here.
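One way to realize this prediction is to model the shooting region as a rectangle and compute when the object's straight-line motion first crosses an edge. The rectangular region and constant velocity are simplifying assumptions for illustration, not details stated in the patent.

```python
import math

def time_to_exit(pos, speed, direction, width, height):
    """Predict when an object at `pos`, moving at `speed` along
    `direction` (radians), will cross the edge of a rectangular
    shooting region [0, width] x [0, height].

    Returns math.inf for a stationary object.
    """
    x, y = pos
    vx = speed * math.cos(direction)
    vy = speed * math.sin(direction)
    times = []
    if vx > 0:
        times.append((width - x) / vx)   # exits through the right edge
    elif vx < 0:
        times.append(-x / vx)            # exits through the left edge
    if vy > 0:
        times.append((height - y) / vy)  # exits through the top edge
    elif vy < 0:
        times.append(-y / vy)            # exits through the bottom edge
    return min(times) if times else math.inf
```

The server would treat the object as "about to leave" once this predicted exit time falls below some small lead time.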
In some embodiments, the ways provided by steps S320 and S330 of determining whether the mobile object is about to leave the current region, and of determining the second shooting area it is about to enter, may also be applied in the foregoing embodiments.
Step S340: when it is determined that the mobile object is about to leave the current shooting region, determine, based on the moving direction, the second shooting area the mobile object is about to enter from among the shooting areas corresponding to the multiple cameras.
For example, suppose shooting area 1, shooting area 2, and shooting area 3 are arranged in sequence along a first direction. If the current shooting region is shooting area 2 and the moving direction is the first direction, the second shooting area the mobile object is about to enter is shooting area 3; if the current shooting region is shooting area 2 and the moving direction is the second direction opposite to the first direction, the second shooting area the mobile object is about to enter is shooting area 1. Of course, this is only an example.
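For the linear arrangement in this example, choosing the second shooting area reduces to picking the neighboring area along the moving direction. A minimal sketch under that assumption (more general camera layouts would need a richer adjacency structure):

```python
def next_region(regions, current, forward):
    """For shooting areas arranged in a line, return the area the
    object is about to enter: the next area when moving in the first
    direction (forward=True), the previous one otherwise.

    Returns None when the object is leaving the line of cameras
    entirely.
    """
    i = regions.index(current)
    j = i + 1 if forward else i - 1
    return regions[j] if 0 <= j < len(regions) else None
```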
Step S350: splice and synthesize the received captured images in the chronological order of their shooting times, obtaining the video file corresponding to the mobile object.
In this embodiment of the present application, step S350 may refer to the content of the foregoing embodiments and is not repeated here.
With the image processing method provided by the embodiments of the present application, when it is determined that a mobile object is present in the first shooting area, the camera corresponding to the first shooting area is controlled to shoot; then the movement data of the mobile object are determined from the captured images of the current shooting region, the shooting areas the mobile object moves into are determined from the movement data, and the cameras corresponding to those shooting areas are controlled in turn to shoot the mobile object; finally, the captured images from all cameras that shot the mobile object are spliced and synthesized into a video file of the mobile object's activity across the multiple shooting areas. This improves the monitoring effect, makes viewing convenient for the user, and improves the user experience.
Referring to Fig. 5, Fig. 5 is a flow diagram of an image processing method provided by a further embodiment of the present application. The method is applied to the above-described server, the server is communicatively connected to multiple cameras, the multiple cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the multiple cameras are adjacent or partially overlap. The flow shown in Fig. 5 is explained in detail below; the image processing method may specifically include the following steps:
Step S410: control the camera of the first shooting area among the multiple cameras to capture images of a mobile object, and receive the captured images from the camera of the first shooting area.
Step S420: taking the shooting area in which the mobile object is currently located as the current shooting region, when it is determined that the mobile object is about to leave the current shooting region, obtain the second shooting area the mobile object is about to enter, the second shooting area being adjacent to or partially overlapping the current shooting region.
Step S430: control the camera corresponding to the second shooting area to shoot the mobile object, and receive the captured images from the camera of the second shooting area.
In this embodiment of the present application, steps S410 to S430 may refer to the content of the foregoing embodiments and are not repeated here.
Step S440: determine whether the received captured images meet a video synthesis condition.
In some embodiments, before splicing and synthesizing all the received captured images, the server may determine whether all the received captured images meet a video synthesis condition, so as to decide whether to splice and synthesize them. The video synthesis condition may include: the received captured images include images of the mobile object taken by at least two adjacent cameras among the multiple cameras, and/or the number of received captured images is greater than a specified threshold.
In some embodiments, when splicing and synthesizing all the received captured images into the video file corresponding to the mobile object, what is usually needed is the mobile object's activity video over one continuous regional scope. The cameras are distributed at different positions, and the shooting areas of two adjacent cameras are adjacent or partially overlap, so the shooting areas of two adjacent cameras together form one continuous shooting area. Therefore, for all the received captured images to meet the video synthesis condition, they may need to include images of the mobile object taken by at least two adjacent cameras among the multiple cameras; this ensures that the subsequently synthesized video file contains video over at least one continuous regional scope.
In some embodiments, splicing and synthesizing the received captured images also requires a large number of captured images in order to form a video file whose playing duration exceeds a certain length. Therefore, the number of captured images meeting the video synthesis condition may need to be greater than a specified threshold. The specific value of the specified threshold is not limited here and may be set according to the desired playing duration of the video file.
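The two parts of the video synthesis condition (at least one adjacent camera pair among the image sources, and a frame count above a threshold) can be checked as follows. The `(camera_id, image)` frame format and the adjacency map are assumptions for illustration.

```python
def meets_synthesis_condition(frames, adjacency, min_count):
    """Check the video synthesis condition: the frames must include
    images from at least two adjacent cameras, and the number of
    frames must be at least `min_count`.

    Each frame is (camera_id, image); `adjacency` maps a camera to
    its neighboring cameras.
    """
    cameras = {camera for camera, _ in frames}
    has_adjacent_pair = any(
        neighbor in cameras
        for camera in cameras
        for neighbor in adjacency.get(camera, ())
    )
    return has_adjacent_pair and len(frames) >= min_count
```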
Step S450: if the video synthesis condition is met, perform the step of splicing and synthesizing the received captured images in the chronological order of their shooting times to obtain the video file corresponding to the mobile object.
In this embodiment of the present application, if all the received captured images meet the video synthesis condition, the way of splicing and synthesizing them in the chronological order of their shooting times may refer to the content of the foregoing embodiments and is not repeated here. If all the received captured images do not meet the video synthesis condition, the received captured images are not spliced and synthesized.
With the image processing method provided by the embodiments of the present application, when it is determined that a mobile object is present in the first shooting area, the camera corresponding to the first shooting area is controlled to shoot; then, according to the shooting areas the mobile object moves into, the cameras corresponding to those shooting areas are controlled in turn to shoot the mobile object; it is then determined whether the captured images from all cameras that shot the mobile object meet the video synthesis condition, and if the video synthesis condition is met, they are spliced and synthesized into a video file of the mobile object's activity across the multiple shooting areas. This improves the monitoring effect, makes viewing convenient for the user, and improves the user experience.
Referring to Fig. 6, Fig. 6 is a structural block diagram of an image processing apparatus 400 provided by an embodiment of the present application. The image processing apparatus 400 is applied to the above-described server, the server is communicatively connected to multiple cameras, the multiple cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the multiple cameras are adjacent or partially overlap. The image processing apparatus 400 includes: a first control module 410, a region acquisition module 420, a second control module 430, and a video synthesis module 440. The first control module 410 is configured to control the camera of the first shooting area among the multiple cameras to capture images of a mobile object, and to receive the captured images from the camera of the first shooting area. The region acquisition module 420 is configured to take the shooting area in which the mobile object is currently located as the current shooting region and, when it is determined that the mobile object is about to leave the current shooting region, to obtain the second shooting area the mobile object is about to enter, the second shooting area being adjacent to or partially overlapping the current shooting region. The second control module 430 is configured to control the camera corresponding to the second shooting area to shoot the mobile object, and to receive the captured images from the camera of the second shooting area. The video synthesis module 440 is configured to splice and synthesize the received captured images in the chronological order of their shooting times, obtaining the video file corresponding to the mobile object.
In some embodiments, the image processing apparatus 400 may further include an object detection module. The object detection module is configured to detect whether a mobile object is present in the first shooting area before the camera of the first shooting area among the multiple cameras is controlled to capture images of a mobile object. If a mobile object is present in the first shooting area, the first control module 410 controls the camera of the first shooting area among the multiple cameras to capture images of the mobile object.
As one implementation, the first shooting area is provided with a human body detection device communicatively connected to the server. In this implementation, the object detection module may be specifically configured to: receive detection data from the human body detection device; and determine, according to the detection data, whether a mobile object is present in the first shooting area.
In some embodiments, the image processing apparatus 400 may further include a movement data acquisition module and a movement detection module. The movement data acquisition module is configured to determine the movement data of the mobile object according to the captured images from the camera of the current shooting region, before the second shooting area the mobile object is about to enter is obtained upon determining that the mobile object is about to leave the current shooting region; the movement data include at least a moving direction and a moving speed. The movement detection module is configured to determine, based on the movement data, whether the mobile object is about to leave the current shooting region.
Further, the region acquisition module 420 may be specifically configured to: when it is determined that the mobile object is about to leave the current shooting region, determine, based on the moving direction, the second shooting area the mobile object is about to enter from among the shooting areas corresponding to the multiple cameras.
In some embodiments, the image processing apparatus 400 may further include a synthesis condition determination module. The synthesis condition determination module is configured to determine whether the received captured images meet the video synthesis condition, before the received captured images are spliced and synthesized in the chronological order of their shooting times to obtain the video file corresponding to the mobile object. If the received captured images meet the video synthesis condition, the video synthesis module 440 splices and synthesizes the received captured images in the chronological order of their shooting times, obtaining the video file corresponding to the mobile object.
Further, the video synthesis condition includes: the received captured images include images of the mobile object taken by at least two adjacent cameras among the multiple cameras, and/or the number of received captured images is greater than a specified threshold.
In some embodiments, the image processing apparatus 400 may further include a repetition module. The repetition module is configured to repeat the steps from taking the shooting area in which the mobile object is currently located as the current shooting region and, when it is determined that the mobile object is about to leave the current shooting region, obtaining the second shooting area the mobile object is about to enter, through controlling the camera corresponding to the second shooting area to shoot the mobile object and receiving the captured images from the camera of the second shooting area, until the mobile object is no longer present in the shooting areas of the multiple cameras.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus and modules described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided by the present application, the coupling between modules may be electrical, mechanical, or of another form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules.
In conclusion scheme provided by the present application, is applied to server, server and multiple cameras communicate to connect, more A camera is distributed in different location, and the shooting area of two neighboring camera is adjacent in multiple cameras or there is part weight It closes.Camera by controlling the first shooting area in multiple cameras carries out image taking to mobile object, and receives first The shooting image of the camera of shooting area, using the shooting area that mobile object is presently in as current shooting region, when true When current shooting region will be left by making mobile object, mobile object is obtained by the second shooting area of entrance, second shooting Region partially overlaps with current shooting area adjacency or presence, then controls the corresponding camera of the second shooting area to mobile pair As being shot, and the shooting image of the camera of the second shooting area is received, when according to the shooting for shooting image received Between sequencing, the shooting image received is subjected to splicing synthesis, obtains the corresponding video file of mobile object, thus real Now the shooting image to mobile object in multiple shooting areas carries out splicing synthesis, obtains mobile object in multiple shooting area quilts The complete motion video of shooting, individually checks the shooting image of each shooting area without user, prompts user's body It tests.
Referring to FIG. 7, FIG. 7 is a structural block diagram of a server provided by an embodiment of the present application. The server 100 may be a cloud server or a traditional server. The server 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the entire server 100 using various interfaces and lines, and performs the various functions of the server 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and calling the data stored in the memory 120. Optionally, the processor 110 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include random access memory (RAM) or read-only memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), and instructions for implementing the following method embodiments. The data storage area may store data created by the server 100 during use (such as a phone book, audio and video data, and chat record data), and the like.
Referring to FIG. 8, FIG. 8 is a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application. Program code is stored in the computer-readable medium 800, and the program code can be called by a processor to perform the methods described in the foregoing method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 that performs any of the method steps in the above methods. The program code can be read from or written to one or more computer program products. The program code 810 may, for example, be compressed in an appropriate form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. An image processing method, applied to a server, wherein the server is communicatively connected to multiple cameras, the multiple cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the multiple cameras are adjacent or partially overlap, the method comprising:
controlling the camera of a first shooting area among the multiple cameras to capture images of a mobile object, and receiving the captured images from the camera of the first shooting area;
taking the shooting area in which the mobile object is currently located as a current shooting region and, when it is determined that the mobile object is about to leave the current shooting region, obtaining a second shooting area the mobile object is about to enter, the second shooting area being adjacent to or partially overlapping the current shooting region;
controlling the camera corresponding to the second shooting area to shoot the mobile object, and receiving the captured images from the camera of the second shooting area; and
splicing and synthesizing the received captured images in the chronological order of their shooting times, obtaining a video file corresponding to the mobile object.
2. The method according to claim 1, wherein before the controlling the camera of the first shooting area among the multiple cameras to capture images of a mobile object, the method further comprises:
detecting whether a mobile object is present in the first shooting area; and
if a mobile object is present in the first shooting area, performing the step of controlling the camera of the first shooting area among the multiple cameras to capture images of the mobile object.
3. The method according to claim 2, characterized in that the first shooting area is provided with a human-body detection device communicatively connected to the server, and detecting whether a moving object is present in the first shooting area comprises:
receiving detection data from the human-body detection device;
determining, according to the detection data, whether a moving object is present in the first shooting area.
4. The method according to claim 1, characterized in that, before obtaining, when it is determined that the moving object is about to leave the current shooting area, the second shooting area that the moving object is about to enter, the method further comprises:
determining movement data of the moving object according to the captured images from the camera of the current shooting area, the movement data including at least a moving direction and a moving speed;
determining, based on the movement data, whether the moving object is about to leave the current shooting area.
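One possible reading of claim 4, sketched under assumed 2-D image-plane coordinates (the patent does not fix a coordinate system), estimates direction and speed from two successive detections and extrapolates the position to predict departure:

```python
import math

def movement_data(p0, p1, dt):
    """Estimate moving direction (unit vector) and speed from two
    successive positions p0, p1 observed dt seconds apart."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dt <= 0:
        return (0.0, 0.0), 0.0
    return (dx / dist, dy / dist), dist / dt

def about_to_leave(pos, direction, speed, bounds, horizon=1.0):
    """Predict whether the object exits the rectangular shooting area
    `bounds` = (xmin, ymin, xmax, ymax) within `horizon` seconds."""
    x = pos[0] + direction[0] * speed * horizon
    y = pos[1] + direction[1] * speed * horizon
    xmin, ymin, xmax, ymax = bounds
    return not (xmin <= x <= xmax and ymin <= y <= ymax)

d, v = movement_data((0, 0), (3, 4), dt=1.0)
print(v)                                             # -> 5.0
print(about_to_leave((9, 5), d, v, (0, 0, 10, 10)))  # -> True (heading past x=10)
```

The prediction horizon and the linear extrapolation are illustrative choices; any motion model that yields a direction and speed would serve the claim's purpose.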
5. The method according to claim 4, characterized in that obtaining, when it is determined that the moving object is about to leave the current shooting area, the second shooting area that the moving object is about to enter comprises:
when it is determined that the moving object is about to leave the current shooting area, determining, based on the moving direction, the second shooting area that the moving object is about to enter from among the shooting areas corresponding to the plurality of cameras.
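Claim 5's direction-based selection could be sketched as picking, among the neighbouring regions' centers, the one most aligned with the moving direction (cosine similarity); the region layout and camera names below are invented for illustration:

```python
import math

def pick_next_region(current_center, direction, regions):
    """Return the camera whose shooting-area center lies most nearly
    along the object's moving direction. `regions` maps a camera id
    to the (x, y) center of its shooting area."""
    best, best_score = None, float("-inf")
    for cam, (cx, cy) in regions.items():
        vx, vy = cx - current_center[0], cy - current_center[1]
        norm = math.hypot(vx, vy)
        if norm == 0:
            continue  # this is the current region itself
        score = (vx * direction[0] + vy * direction[1]) / norm  # cosine term
        if score > best_score:
            best, best_score = cam, score
    return best

regions = {"cam_east": (20, 0), "cam_north": (0, 20), "cam_west": (-20, 0)}
print(pick_next_region((0, 0), (1, 0), regions))  # -> cam_east
```

Because only the argmax matters, the moving direction need not be normalized; only the region offsets are.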
6. The method according to claim 1, characterized in that, before splicing the received captured images in chronological order of their shooting times to obtain the video file corresponding to the moving object, the method further comprises:
judging whether the received captured images satisfy a video synthesis condition;
if the video synthesis condition is satisfied, performing the step of splicing the received captured images in chronological order of their shooting times to obtain the video file corresponding to the moving object.
7. The method according to claim 6, characterized in that the video synthesis condition comprises:
the received captured images including images of the moving object captured by at least two adjacent cameras among the plurality of cameras; and/or
the number of images among the received captured images being greater than a specified threshold.
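The two alternatives of claim 7 could be checked as follows; the adjacency representation (a set of unordered camera-id pairs) and the threshold value are assumptions, and the claim's "and/or" is rendered here as `or`:

```python
def meets_synthesis_condition(frames_by_camera, adjacency, threshold=2):
    """Video-synthesis condition sketch: images from at least two
    adjacent cameras, and/or more images than a specified threshold."""
    cams = list(frames_by_camera)
    # Does any pair of cameras that contributed frames appear in
    # the adjacency relation?
    has_adjacent_pair = any(
        frozenset((a, b)) in adjacency
        for i, a in enumerate(cams) for b in cams[i + 1:]
    )
    total = sum(len(v) for v in frames_by_camera.values())
    return has_adjacent_pair or total > threshold

adjacency = {frozenset(("cam1", "cam2"))}
print(meets_synthesis_condition({"cam1": [0, 1], "cam2": [2]}, adjacency))  # -> True
print(meets_synthesis_condition({"cam3": [0]}, adjacency))                  # -> False
```

Gating synthesis this way avoids emitting a "video" built from a single stray frame or from cameras that never actually handed the object off to each other.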
8. The method according to any one of claims 1-7, characterized in that, after controlling the camera corresponding to the second shooting area to capture images of the moving object and receiving the captured images from the camera of the second shooting area, the method further comprises:
repeating the steps of taking the shooting area in which the moving object is currently located as the current shooting area, obtaining, when it is determined that the moving object is about to leave the current shooting area, the second shooting area that the moving object is about to enter, controlling the camera corresponding to the second shooting area to capture images of the moving object, and receiving the captured images from the camera of the second shooting area, until the moving object is no longer present in any of the shooting areas of the plurality of cameras.
9. An image processing apparatus, characterized in that it is applied to a server, the server being communicatively connected to a plurality of cameras, the plurality of cameras being distributed at different locations, and the shooting areas of two adjacent cameras among the plurality of cameras being adjacent to or partially overlapping each other; the apparatus comprises: a first control module, a region obtaining module, a second control module, and a video synthesis module, wherein
the first control module is configured to control a camera of a first shooting area among the plurality of cameras to capture images of a moving object, and to receive the captured images from the camera of the first shooting area;
the region obtaining module is configured to take the shooting area in which the moving object is currently located as a current shooting area and, when it is determined that the moving object is about to leave the current shooting area, to obtain a second shooting area that the moving object is about to enter, the second shooting area being adjacent to or partially overlapping the current shooting area;
the second control module is configured to control the camera corresponding to the second shooting area to capture images of the moving object, and to receive the captured images from the camera of the second shooting area;
the video synthesis module is configured to splice the received captured images in chronological order of their shooting times to obtain a video file corresponding to the moving object.
10. A server, characterized by comprising:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method according to any one of claims 1-8.
11. A computer-readable storage medium, characterized in that program code is stored in the computer-readable storage medium, the program code being callable by a processor to perform the method according to claim 1.
CN201910580499.0A 2019-06-28 2019-06-28 Image processing method, device, server and storage medium Pending CN110177258A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910580499.0A CN110177258A (en) 2019-06-28 2019-06-28 Image processing method, device, server and storage medium


Publications (1)

Publication Number Publication Date
CN110177258A true CN110177258A (en) 2019-08-27

Family

ID=67699522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910580499.0A Pending CN110177258A (en) 2019-06-28 2019-06-28 Image processing method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN110177258A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111405203A (en) * 2020-03-30 2020-07-10 杭州海康威视数字技术股份有限公司 Method and device for determining picture switching, electronic equipment and storage medium
CN111950520A (en) * 2020-08-27 2020-11-17 重庆紫光华山智安科技有限公司 Image recognition method and device, electronic equipment and storage medium
CN112202990A (en) * 2020-09-14 2021-01-08 深圳市百川安防科技有限公司 Video prerecording method, camera and electronic equipment
CN114697723A (en) * 2020-12-28 2022-07-01 北京小米移动软件有限公司 Video generation method, device and medium
CN115086545A (en) * 2021-03-15 2022-09-20 安讯士有限公司 Method and surveillance camera for processing a video stream
WO2023104222A3 (en) * 2021-12-09 2023-08-03 成都市喜爱科技有限公司 Intelligent video shooting method and apparatus, and electronic device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090052739A1 (en) * 2007-08-23 2009-02-26 Hitachi Kokusai Electric Inc. Human pursuit system, human pursuit apparatus and human pursuit program
CN104660998A (en) * 2015-02-16 2015-05-27 苏州阔地网络科技有限公司 Relay tracking method and system
CN105141824A (en) * 2015-06-17 2015-12-09 广州杰赛科技股份有限公司 Image acquisition method and image acquisition device
CN105245850A (en) * 2015-10-27 2016-01-13 太原市公安局 Method, device and system for tracking target across surveillance cameras
CN105530465A (en) * 2014-10-22 2016-04-27 北京航天长峰科技工业集团有限公司 Security surveillance video searching and locating method
CN108234961A (en) * 2018-02-13 2018-06-29 欧阳昌君 A kind of multichannel video camera coding and video flowing drainage method and system
CN108521557A (en) * 2018-04-10 2018-09-11 陕西工业职业技术学院 A kind of monitoring method and monitoring system being suitable for large-scale logistics warehouse camera
CN108542054A (en) * 2018-05-24 2018-09-18 周楚骎 A kind of child position State Feedback Approach and Intelligent bracelet
CN109510972A (en) * 2019-01-08 2019-03-22 中南林业科技大学 A kind of wild animal intelligent surveillance method based on Internet of Things
CN109729287A (en) * 2018-12-06 2019-05-07 浙江大华技术股份有限公司 A kind of method, apparatus and calculating equipment, storage medium of perimeter region monitoring



Similar Documents

Publication Publication Date Title
CN110177258A (en) Image processing method, device, server and storage medium
CN110267008A (en) Image processing method, device, server and storage medium
CN105763845B (en) The method and apparatus of vehicle monitoring
US8300890B1 (en) Person/object image and screening
CN109241933A (en) Video linkage monitoring method, monitoring server, video linkage monitoring system
CN106504580A (en) A kind of method for detecting parking stalls and device
CN110278413A (en) Image processing method, device, server and storage medium
CN104270565B (en) Image capturing method, device and equipment
CN109040693B (en) Intelligent alarm system and method
JP2014099922A (en) Method and device for capturing image
CN107948508A (en) A kind of vehicle-mounted end image capturing system and method
CN110267009A (en) Image processing method, device, server and storage medium
CN110390705A (en) A kind of method and device generating virtual image
US20100245596A1 (en) System and method for image selection and capture parameter determination
CN106559645A (en) Based on the monitoring method of video camera, system and device
CN108769604A (en) Processing method, device, terminal device and the storage medium of monitor video
CN109948525A (en) It takes pictures processing method, device, mobile terminal and storage medium
CN110191324B (en) Image processing method, image processing apparatus, server, and storage medium
CN103501410B (en) The based reminding method of shooting, the generation method of device and detection pattern, device
CN110147752A (en) Motion detection processing method, device, electronic equipment and storage medium
CN110278414A (en) Image processing method, device, server and storage medium
CN107852480A (en) A kind of information intelligent acquisition method and device
CN110267007A (en) Image processing method, device, server and storage medium
CN106028088A (en) Insertion method and device of media data
CN113269039A (en) On-duty personnel behavior identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190827)