CN110267009B - Image processing method, image processing apparatus, server, and storage medium - Google Patents

Image processing method, image processing apparatus, server, and storage medium

Info

Publication number
CN110267009B
CN110267009B
Authority
CN
China
Prior art keywords: image, content, cameras, images, shot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910579270.5A
Other languages
Chinese (zh)
Other versions
CN110267009A (en)
Inventor
Du Peng (杜鹏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910579270.5A
Publication of CN110267009A
Application granted
Publication of CN110267009B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, a server, and a storage medium. The image processing method is applied to the server and includes the following steps: acquiring captured images of a plurality of cameras; grouping the captured images of the plurality of cameras by moving object, according to the moving objects present in the captured images, to obtain a plurality of first image groups, where a first image group is a set of captured images containing the same moving object; when a removal instruction for removing specified content from the captured images of at least one first image group is received, removing the specified content present in the captured images of the at least one first image group to obtain at least one second image group; and stitching the captured images of the at least one second image group in order of their capture times to obtain a video file corresponding to at least one moving object. The method can generate an activity video of a moving object as it moves across multiple shooting areas.

Description

Image processing method, image processing apparatus, server, and storage medium
Technical Field
The present application relates to the field of shooting technologies, and in particular, to an image processing method, an image processing apparatus, a server, and a storage medium.
Background
With the widespread use of shooting technology in daily life, the demand for video capture keeps growing. For example, more and more places deploy monitoring systems that use cameras to monitor the state of an area, human activity, and the like. However, because the imaging area of a camera is limited, that is, its angle of view is limited, a single camera can usually only image a fixed area, so the content that can be captured is limited.
Disclosure of Invention
In view of the foregoing problems, the present application provides an image processing method, an image processing apparatus, a server, and a storage medium, which can produce a surveillance video of a moving object across a wide area.
In a first aspect, an embodiment of the present application provides an image processing method applied to a server, where the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the plurality of cameras are adjacent or partially overlap. The method includes: acquiring captured images of the plurality of cameras; grouping the captured images of the plurality of cameras by moving object, according to the moving objects present in the captured images, to obtain a plurality of first image groups, where a first image group is a set of captured images containing the same moving object and each first image group corresponds to a different moving object; when a removal instruction for removing specified content from the captured images of at least one first image group is received, removing the specified content present in the captured images of the at least one first image group to obtain at least one second image group; and stitching the captured images of the at least one second image group in order of their capture times to obtain a video file corresponding to at least one moving object.
In a second aspect, an embodiment of the present application provides an image processing apparatus applied to a server, where the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the plurality of cameras are adjacent or partially overlap. The apparatus includes an image acquisition module, an image grouping module, a content removal module, and a video synthesis module. The image acquisition module is used to acquire captured images of the plurality of cameras; the image grouping module is used to group the captured images of the plurality of cameras by moving object, according to the moving objects present in the captured images, to obtain a plurality of first image groups, where a first image group is a set of captured images containing the same moving object and each first image group corresponds to a different moving object; the content removal module is used to remove, when a removal instruction for removing specified content from the captured images of at least one first image group is received, the specified content present in the captured images of the at least one first image group to obtain at least one second image group; and the video synthesis module is used to stitch the captured images of the at least one second image group in order of their capture times to obtain a video file corresponding to at least one moving object.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; a memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the image processing method provided by the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code can be called by a processor to execute the image processing method provided in the first aspect.
The solution provided by the application is applied to a server in communication connection with a plurality of cameras, where the cameras are distributed at different positions and the shooting areas of two adjacent cameras are adjacent or partially overlap. Captured images of the plurality of cameras are acquired; according to the moving objects present in the captured images, the captured images are grouped by moving object to obtain a plurality of first image groups, each being a set of captured images containing the same moving object; when a removal instruction for removing specified content from the captured images of at least one first image group is received, the specified content present in those captured images is removed to obtain at least one second image group; and the captured images of the at least one second image group are stitched in order of their capture times to obtain a video file corresponding to at least one moving object. In this way, the captured images of a moving object across multiple shooting areas are stitched into one complete surveillance video of that object, improving the monitoring effect; the specified content is removed from the video file so that it does not disturb the user; the user does not need to check the captured images of each shooting area separately; and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The following drawings show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of a distributed system provided by an embodiment of the present application.
FIG. 2 shows a flow diagram of an image processing method according to one embodiment of the present application.
FIG. 3 illustrates a grouping diagram of image groupings provided according to one embodiment of the present application.
FIG. 4 shows a flow diagram of an image processing method according to another embodiment of the present application.
FIG. 5 shows a flow diagram of an image processing method according to yet another embodiment of the present application.
Fig. 6 shows a flowchart of step S320 in an image processing method according to yet another embodiment of the present application.
FIG. 7 shows a block diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 8 shows a block diagram of a content removal module in an image processing apparatus according to an embodiment of the present application.
FIG. 9 shows another block diagram of a content removal module in an image processing apparatus according to one embodiment of the present application.
FIG. 10 shows a block diagram of an image grouping module in an image processing apparatus according to an embodiment of the present application.
Fig. 11 is a block diagram of a server for executing an image processing method according to an embodiment of the present application.
Fig. 12 is a storage unit for storing or carrying program codes for implementing an image processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
With the development of society and the advancement of technology, monitoring systems are deployed in more and more places. In most monitoring scenarios, each camera can only monitor a fixed area, so only single-area surveillance video can be produced, and a user who needs to review several areas must view each area's video separately. To address this, panoramic video monitoring systems have appeared on the market: cameras arranged at different positions capture images that are composited into a panoramic image. However, the panoramic image covers so large a range that a particular object cannot be followed, and a user who wants the surveillance video of one object must sift through a panorama full of other monitored content, which makes for a poor user experience.
In view of the above problems, after long study the inventors propose the image processing method, apparatus, server, and storage medium of the embodiments of the present application: monitoring shooting is performed by a plurality of cameras distributed at different positions, the captured images of the cameras are grouped by moving object, specified content is removed from the grouped captured images, and the grouped captured images are then stitched into video files corresponding to the different moving objects. Removing the specified content from the video files reduces its interference with users and makes the files convenient to view. The specific image processing method is described in detail in the following embodiments.
The following description will be made with respect to a distributed system to which the image processing method provided in the embodiment of the present application is applied.
Referring to fig. 1, fig. 1 shows a schematic diagram of a distributed system provided in an embodiment of the present application. The distributed system includes a server 100 and a plurality of cameras 200 (fig. 1 shows 4 cameras 200), where the server 100 is connected to each of the cameras 200 for data interaction; for example, the server 100 receives images sent by a camera 200, or sends instructions to a camera 200, which is not limited here. In addition, the server 100 may be a cloud server or a traditional server; the camera 200 may be a bullet camera, a dome camera, a high-definition smart PTZ dome camera, a compact cylindrical camera, a board camera, a flying-saucer camera, a mobile phone camera, or the like; and the camera lens may be a wide-angle lens, a standard lens, a telephoto lens, a zoom lens, a pinhole lens, or the like, none of which is limited here.
In some embodiments, the plurality of cameras 200 are disposed at different positions to photograph different areas, and the shooting areas of every two adjacent cameras 200 are adjacent or partially overlap. It can be understood that, depending on its angle of view and mounting position, each camera 200 photographs a different area, and arranging every two adjacent cameras 200 so that their shooting areas touch or partially overlap lets the distributed system completely cover the area to be photographed. The cameras 200 may be arranged side by side at intervals along a line to capture a long, linear area, or at intervals around a ring to capture an annular area; other arrangements are also possible and are not limited here. A coverage check for the linear case is sketched below.
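Purely as an illustration (the patent prescribes no data model), the following Python sketch models a linear camera arrangement and checks the adjacency condition described above, namely that neighboring shooting areas touch or overlap; the `CameraZone` type and the one-dimensional coordinates are assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass
class CameraZone:
    cam_id: str
    start: float  # left edge of the shooting area along the corridor (assumed metres)
    end: float    # right edge of the shooting area

def covers_without_gaps(zones):
    """Adjacent areas must touch or overlap so the whole region is covered."""
    ordered = sorted(zones, key=lambda z: z.start)
    return all(a.end >= b.start for a, b in zip(ordered, ordered[1:]))

# Four cameras along a corridor: cam2 and cam3 overlap, the others just touch.
layout = [CameraZone("cam1", 0, 5), CameraZone("cam2", 5, 10),
          CameraZone("cam3", 9, 14), CameraZone("cam4", 14, 20)]
print(covers_without_gaps(layout))  # True: no blind spots between the areas
```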
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application. In the image processing method, monitoring shooting is performed by a plurality of cameras distributed at different positions, the captured images of the cameras are grouped by moving object, specified content is removed from the grouped captured images, and the grouped captured images are stitched into video files corresponding to the different moving objects; removing the specified content from the video files reduces its interference with users and makes the files convenient to view. In a specific embodiment, the image processing method is applied to the image processing apparatus 400 shown in fig. 7 and to the server 100 (fig. 11) configured with the image processing apparatus 400. The specific flow of this embodiment is described below taking a server as an example; the server may be a cloud server or a traditional server, which is not limited here. The cameras are distributed at different positions, and the shooting areas of two adjacent cameras among them are adjacent or partially overlap. As detailed in the flow shown in fig. 2, the image processing method may specifically include the following steps:
step S110: and acquiring shot images of the plurality of cameras.
In the embodiment of the application, the cameras distributed at different positions each photograph their corresponding shooting area and upload the captured images to the server, which receives them. Because the cameras are at different positions and the shooting areas of adjacent cameras are adjacent or partially overlap, the server acquires captured images of different shooting areas that together form one complete area; that is, the server can acquire captured images of a large, complete region. The way the cameras upload captured images is not limited; for example, images may be uploaded at a set interval.
In some embodiments, each of the plurality of cameras may be in an on state, so that the entire shooting area corresponding to the plurality of cameras may be shot, wherein each of the plurality of cameras may be in an on state at a set time period or all the time. Of course, each camera in the multiple cameras may also be in an on state or an off state according to the received control instruction, and the control instruction may include an instruction automatically sent by a server connected to the camera, an instruction sent by the electronic device to the camera through the server, an instruction generated by a user through triggering the camera, and the like, which is not limited herein.
Step S120: and according to moving objects existing in the shot images, grouping the shot images of the multiple cameras according to different moving objects to obtain multiple first image groups, wherein the first image groups are a set of shot images containing the same moving object, and the moving objects corresponding to the first image groups are different.
In the embodiment of the application, after acquiring the shot images of the multiple cameras, the server can acquire the moving object existing in the shot images. The moving object may be a movable object such as a person, an animal, or a vehicle, and is not limited herein. In some embodiments, the server may identify the moving object in the captured image through feature information of the moving object, for example, the server may identify a moving person or a moving animal through facial features, body type features, and the like, and may identify a moving vehicle through vehicle features (license plate, vehicle color, vehicle type, and the like), which is not limited herein.
After identifying the moving objects in the captured images, the server may collect, from the captured images of the plurality of cameras, those in which a moving object is present. The server can group these captured images by moving object in real time to obtain a plurality of first image groups, where the first image groups correspond one-to-one to the moving objects, that is, each first image group corresponds to a different moving object. In this way, the server obtains all the captured images in which each moving object was photographed. It can be understood that, since several moving objects may appear in one captured image, the same captured image may be assigned to more than one first image group.
For example, referring to fig. 3, when the moving objects include object 1, object 2, and object 3, and object 1, object 2, and object 3 move through different camera areas, the cameras in the corresponding areas capture images, and the captured images corresponding to object 1, object 2, and object 3 are grouped, yielding an image group for object 1, an image group for object 2, and an image group for object 3.
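As a minimal sketch of this grouping step (not the patent's own algorithm), the following Python groups captured images into first image groups keyed by moving-object identity; `identify_moving_objects` is a hypothetical recognizer standing in for the face or license-plate feature matching described above.

```python
from collections import defaultdict

def group_by_moving_object(captured_images, identify_moving_objects):
    """captured_images: iterable of (camera_id, timestamp, image) tuples.
    identify_moving_objects: hypothetical function returning the IDs of the
    moving objects found in an image (e.g. via face or license-plate features)."""
    first_image_groups = defaultdict(list)  # object_id -> list of captured images
    for camera_id, timestamp, image in captured_images:
        for object_id in identify_moving_objects(image):
            first_image_groups[object_id].append((camera_id, timestamp, image))
    return first_image_groups
```

Note that an image showing several moving objects is appended to several groups, which matches the observation above that one captured image may belong to more than one first image group.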
Step S130: when a removal instruction for removing specified content from at least one captured image of the first image group is received, the specified content existing in the captured image of the at least one first image group is removed, and at least one second image group is obtained.
In the embodiment of the application, the server may receive a removal instruction sent by an electronic device, where the removal instruction instructs the server to remove specified content from the captured images of the at least one first image group.
In some embodiments, the user sends the removal instruction to the server through the electronic device when the user wants specified content removed from the captured images of a certain first image group, so that the video file later generated from that first image group does not contain the specified content.
In some embodiments, the removal instruction received by the server may carry the selected at least one first image group and the specified content to be removed, where the specified content may include the target person, the target object, the target background, and the like.
In some embodiments, the server may determine, from the received removal instruction, the at least one first image group from which the user requires the specified content to be removed, and determine what the specified content is. Based on the determined first image group(s) and specified content, the server removes the specified content from the captured images in the at least one first image group. To do so, the server can examine each captured image in a first image group for the specified content and, if the specified content is present, remove it from that image; after this operation has been performed on every image of the group, the specified content present in the captured images of that first image group has been removed. The server may recognize the specified content from its feature information, image content, and the like, which is not limited here.
In some embodiments, removing the specified content from a captured image may include cutting the specified content out of the image or blurring it. Blurring the specified content may include, but is not limited to, reducing its sharpness, brightness, and the like.
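A minimal OpenCV sketch of the two removal options just mentioned; the rectangular region coordinates are assumed to come from the removal instruction, the blur kernel size is an arbitrary illustrative choice, and blacking out the box is one possible reading of "cutting" the content.

```python
import cv2

def blur_region(image, x, y, w, h, kernel=51):
    """Blur the specified content in place by Gaussian-blurring its bounding box."""
    roi = image[y:y+h, x:x+w]
    image[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (kernel, kernel), 0)
    return image

def crop_out_region(image, x, y, w, h):
    """One interpretation of 'cutting': overwrite the bounding box with black."""
    image[y:y+h, x:x+w] = 0
    return image
```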
Step S140: and splicing and synthesizing the shot images of the at least one second image group according to the shooting time sequence of the shot images to obtain a video file corresponding to the at least one moving object.
In this embodiment of the application, after obtaining the second image group(s) corresponding to the moving object(s), the server may stitch the captured images in the at least one second image group in order of their capture times, obtaining the video file corresponding to the at least one moving object.
In some embodiments, the server may read the capture time of a captured image from the image's stored file information. When uploading a captured image, the camera can send the capture time as part of the image's description information, so that the server obtains the capture time upon receiving the image. Of course, the way the server acquires the capture time is not limited; for example, the server may query the camera for it.
In some embodiments, the server may sort the shooting times of all the shot images in each second image group from first to last, so as to obtain the sequence of the shooting times. And then according to the sequence, all the shot images of the second image group are spliced and synthesized to obtain the video file of the moving object corresponding to the second image group. That is, the captured images in the second image group constitute each frame of image in the video file, and the order of the images in the video file is the same as the order of the capturing time.
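The stitching-and-synthesis step can be sketched with OpenCV's `VideoWriter`; the frame rate, output size, and codec below are assumptions, since the patent only requires that frames appear in capture-time order.

```python
import cv2

def synthesize_video(second_image_group, out_path, fps=25, size=(1280, 720)):
    """second_image_group: list of (timestamp, image) pairs for one moving object."""
    ordered = sorted(second_image_group, key=lambda pair: pair[0])  # earliest first
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    for _, frame in ordered:
        writer.write(cv2.resize(frame, size))  # all frames must share one size
    writer.release()
```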
In some embodiments, the server may also send the video file to an electronic device or a third-party platform (e.g., APP, web mailbox, etc.) for convenient viewing and downloading by the user, so that the user can directly view the video formed by the captured images of the moving object.
In addition, because a moving object takes time to move from one shooting area to another, different cameras photograph it one after another, so the capture times of the images have a definite order, and a video stitched by the server in capture-time order reflects the object's trajectory through the combined shooting area of the cameras. And because the shooting areas of two adjacent cameras are adjacent or partially overlap, the combined shooting area is one complete region, so the stitched video file can reflect the object's activity over a large area.
In the image processing method provided by this embodiment of the application, captured images taken by a plurality of cameras distributed at different positions are grouped according to the moving objects in the captured images; a removal instruction for removing specified content from the captured images of at least one group is received; based on that instruction, the specified content is removed from the captured images of the at least one group; and the captured images of each group are then stitched into video files corresponding to the different moving objects. This improves the monitoring effect, removes the specified content from the video files so that it does not disturb the user, spares the user from checking the captured images of each shooting area separately, and improves the user experience.
Referring to fig. 4, fig. 4 is a flowchart illustrating an image processing method according to another embodiment of the present application. The method is applied to the server, the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and shooting areas of two adjacent cameras in the cameras are adjacent or partially overlapped. As will be described in detail with respect to the flow shown in fig. 4, the image processing method may specifically include the following steps:
step S210: and acquiring shot images of the plurality of cameras.
In the embodiment of the present application, step S210 may refer to the contents of the above embodiments, and is not described herein again.
Step S220: and according to moving objects existing in the shot images, grouping the shot images of the multiple cameras according to different moving objects to obtain multiple first image groups, wherein the first image groups are a set of shot images containing the same moving object, and the moving objects corresponding to the first image groups are different.
In the embodiment of the present application, the server may acquire a captured image in which at least one moving object exists from captured images of a plurality of cameras according to feature information of the moving object. Wherein the feature information of the mobile object can characterize the external features of the mobile object.
In some embodiments, when the moving object is a person, the person's feature information may include a face image, clothing features, body shape features, gait features, and the like; the specific feature information is not limited. When the moving object is an animal, its feature information may include fur features, body shape features, and the like, which are not limited here. When the moving object is a vehicle, its feature information may include license plate information, vehicle color information, vehicle type information, and the like, which are not limited here.
In some embodiments, the moving object may include a person, and the feature information may include a face image. Therefore, the server can determine the shot image with at least one human face from the shot images of the plurality of cameras, so as to obtain the shot image with at least one moving object in the shot images of the plurality of cameras.
After obtaining the captured images of at least one moving object in the captured images of the plurality of cameras, the server may group all the captured images with at least one moving object according to different moving objects to obtain a plurality of first image groups, where the first image groups are a set of captured images including the same moving object, and the moving objects corresponding to each of the first image groups are different.
Step S230: and sending at least one shot image of each of the plurality of first image groups to the electronic device.
In some embodiments, after obtaining the plurality of first image groups, the server may select at least one captured image from each first image group and send it to the electronic device, so that the user of the electronic device can select the first image group(s) from which specified content should be removed and select, within the captured images, the specified content to remove; when several first image groups are selected, different specified content may be selected for different groups. The user can select a target area in a captured image, and the content of that target area serves as the specified content selected by the user.
In some embodiments, the at least one captured image may be selected from each first image group according to a set rule. As an example, the image selected under the set rule may be the image in the first image group in which the moving object is at the middle of the shooting area. Of course, the specific set rule is not limited here.
After receiving the at least one captured image corresponding to each first image group, the electronic device may display a selection interface for choosing the content to remove, built from the at least one captured image of each of the plurality of first image groups. The user selects a first image group and then selects a target area in one of its captured images, and the content of that target area serves as the selected specified content. The electronic device can then generate a removal instruction from the selected first image group(s) and specified content and send it to the server. In this way, after viewing at least one captured image of each first image group, the user can choose both the first image group(s) from which content should be removed and the content itself, which improves the user experience.
Step S240: and receiving a removal instruction sent by the electronic equipment, wherein the removal instruction is sent by the electronic equipment when the selection interface for selecting the removal content is detected to select the content of the target area in the shot image corresponding to at least one first image group in the selection interface after the selection interface for selecting the removal content is displayed according to at least one shot image of each first image group in the plurality of first image groups.
In some embodiments, after receiving the removal instruction, the server may obtain from it the at least one first image group selected by the user and the specified content, where the specified content is the content of the target area selected in the captured image corresponding to that first image group.
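The patent does not define a wire format for the removal instruction; purely as an illustration, it could carry the selected group(s) and the target area as follows, where every field name is hypothetical.

```python
# Hypothetical removal-instruction payload; all field names are illustrative only.
removal_instruction = {
    "group_ids": ["group_1"],                       # selected first image group(s)
    "reference_image": "cam2/2019-06-28T10-15-00",  # captured image the user marked
    "target_region": {"x": 120, "y": 80, "w": 60, "h": 140},  # selected target area
    "action": "blur",                               # or "crop"
}
```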
Step S250: and responding to the removal instruction, and identifying the content of the target area to obtain the characteristic information of the content of the target area.
In some embodiments, after obtaining the removal instruction, the server may identify the content of the target area to obtain feature information of the content of the target area, where the feature information may be an image feature. For example, when the content of the target area is a person, the feature information may be a face feature, a body shape feature, or the like, and when the content of the target area is an object, the feature information may be an appearance feature of the object, or the like, which is not limited herein.
Step S260: and removing the content matched with the characteristic information in each shot image of the at least one first image group based on the characteristic information to obtain at least one second image group.
After the server identifies the feature information of the target area selected by the user, each shot image can be matched with the feature information, and the shot image with the content matched with the feature information is determined. The server may then remove the content matching the feature information in the captured image for the captured image in which the content matching the feature information exists. The server can obtain at least one second image group after processing each shot image of the at least one first image group.
Step S270: and splicing and synthesizing the shot images of the at least one second image group according to the shooting time sequence of the shot images to obtain a video file corresponding to the at least one moving object.
In the embodiment of the present application, step S270 may refer to the contents of the foregoing embodiments, and is not described herein again.
In the image processing method provided by this embodiment of the application, captured images taken by a plurality of cameras distributed at different positions are grouped according to the moving objects in the captured images; at least one captured image of each group is sent to the electronic device; and, following the removal instruction returned by the electronic device, the specified content is removed from the captured images of at least one group, after which the captured images of each group are stitched into video files corresponding to the different moving objects. This improves the monitoring effect, removes from the video files the specified content the user asked to remove so that it does not disturb the user, spares the user from checking the captured images of each shooting area separately, and improves the user experience.
Referring to fig. 5, fig. 5 is a flowchart illustrating an image processing method according to another embodiment of the present application. The method is applied to the server, the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and shooting areas of two adjacent cameras in the cameras are adjacent or partially overlapped. As will be described in detail with respect to the flow shown in fig. 5, the image processing method may specifically include the following steps:
step S310: and acquiring shot images of the plurality of cameras.
In the embodiment of the present application, the step S310 may refer to the contents of the foregoing embodiments, and is not described herein again.
Step S320: and according to moving objects existing in the shot images, grouping the shot images of the multiple cameras according to different moving objects to obtain multiple first image groups, wherein the first image groups are a set of shot images containing the same moving object, and the moving objects corresponding to the first image groups are different.
In some embodiments, referring to fig. 6, step S320 may include:
step S321: and acquiring shot images meeting screening conditions including images shot by the cameras in a specified area or images shot in a specified time period from the shot images of the plurality of cameras.
After the server acquires the shot images of the multiple cameras and before the shot images are grouped, the shot images of the multiple cameras can be screened so as to meet the requirements of users.
In some embodiments, the screening conditions for the captured images of the plurality of cameras may include: images captured by cameras in a designated area, or images captured in a designated time period. It can be understood that a user may want to view a moving object's activity video within a designated area, and so can select the designated area as the screening condition; likewise, a user may need the moving object's activity video within a designated time period, and so can select the designated time period as the screening condition.
In one embodiment, the screening condition includes an image captured by a camera in a designated area. Before the grouping the shot images of the plurality of cameras according to different moving objects according to the moving objects existing in the shot images, the method further comprises:
sending data of a plurality of shooting areas corresponding to the plurality of cameras to a mobile terminal, where the cameras correspond one-to-one to the shooting areas; and receiving a selection instruction, sent by the mobile terminal, for a designated area among the plurality of shooting areas, and obtaining the designated area from the selection instruction, where the selection instruction is sent after the mobile terminal displays a selection interface according to the data of the plurality of shooting areas and detects a selection operation on the shooting areas in that interface.
The server can acquire the shooting area corresponding to each camera and send the data of the shooting areas to the mobile terminal, which generates a selection interface from that data. The selection interface can show the whole region formed by the shooting areas, divided into the areas corresponding one-to-one to the cameras, so that the user can select the shooting areas required.
In some embodiments, the shooting area data sent by the server to the mobile terminal may include the name of each shooting area, so that the mobile terminal can label the areas in the selection interface for the user. For example, in a home monitoring scene, the cameras include camera 1 through camera 5, whose shooting areas are bedroom 1, bedroom 2, the living room, the kitchen, and the study, respectively; the mobile terminal can display the name of each shooting area in the interface, and an activity video of a moving object in the designated area the user selects can then be obtained.
In some embodiments, when the server receives a selection instruction sent by the electronic device, it may determine the designated area the user selected and then judge whether that designated area is continuous, that is, whether there is a gap in it. For example, if shooting area 1, shooting area 2, shooting area 3, and shooting area 4 are adjacent in sequence and the user selects shooting areas 1, 3, and 4, the selection skips shooting area 2, so the designated area selected by the user is not continuous.
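For the linear arrangement in this example, the continuity check reduces to verifying that the selected areas occupy consecutive positions; a minimal sketch follows (a ring or other layout would instead need an adjacency graph, which is beyond this illustration).

```python
def is_continuous(selected_areas, ordered_areas):
    """ordered_areas lists the shooting areas in their physical order."""
    positions = sorted(ordered_areas.index(area) for area in selected_areas)
    return all(b - a == 1 for a, b in zip(positions, positions[1:]))

areas = ["area1", "area2", "area3", "area4"]
print(is_continuous(["area1", "area3", "area4"], areas))  # False: area2 is skipped
print(is_continuous(["area2", "area3", "area4"], areas))  # True
```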
Further, if the server determines that the designated area selected by the user is not continuous, it may send prompt content to the electronic device telling the user so, so that the user can reselect. It can be understood that a user usually needs to view a moving object's activity video over a continuous area; if the designated area is not continuous, that need cannot be met, so the prompt content helps the user select a continuous area as the designated area.
Of course, the user may instead confirm the selected (discontinuous) designated area through the electronic device, and the server will then screen the captured images according to the currently selected designated area.
In some embodiments, images captured by cameras in the designated area and images captured in the designated time period can both be used as screening conditions at the same time, making it easy to obtain the moving object's activity video in exactly the shooting area and time period the user wants to view. Of course, the specific screening conditions are not limited here.
Step S322: and according to moving objects existing in the shot images meeting the screening conditions, grouping the shot images meeting the screening conditions according to different moving objects to obtain a plurality of first image groups.
After the captured images of the plurality of cameras are screened, the captured images meeting the screening condition can be grouped by moving object to obtain a plurality of first image groups.
Step S330: Identifying the plurality of types of content present in the captured images of each of the plurality of first image groups.
In the embodiment of the present application, the server may identify the plurality of types of content present in the captured images of each of the plurality of first image groups. The types of content may include people, articles, animals, plants, and the like, without limitation.
Step S340: and sending the plurality of types of content corresponding to each first image group in the plurality of first image groups to the electronic equipment.
After obtaining the types of content present in the captured images of each first image group, the server may send the types of content corresponding to each first image group to the electronic device, so that the electronic device displays a selection interface for choosing the content to remove. The interface lists the types of content corresponding to each first image group, so the user can select the first image group from which specified content should be removed together with at least one type of content in that group to remove.
Step S350: and receiving a removal instruction sent by the electronic equipment, wherein the removal instruction is sent by the electronic equipment when the selection operation of the appointed type content corresponding to at least one first image group in the selection interface is detected after the selection interface for selecting the removal content is displayed according to the plurality of types of content corresponding to each first image group in the plurality of first image groups.
In some embodiments, after receiving the removal instruction, the server may obtain, according to the removal instruction, at least one first image group selected by the user and the specified type of content that needs to be removed.
Step S360: and in response to the removal instruction, removing the specified type of content existing in each captured image of the at least one first image group to obtain at least one second image group.
In some embodiments, the server may identify a specified type of content present in each captured image of the first image group and then remove the specified type of content present in each captured image, thereby obtaining the second image group with the specified type of content removed. The server may identify whether the specific type of content exists in the captured image according to the feature information of the specific type of content, for example, when the specific type of content is a vehicle type of content, the captured image may be identified according to the feature information of the vehicle, and if content matching the feature information of the vehicle exists in the captured image, it is determined that the vehicle type of content exists in the captured image.
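As one concrete stand-in for the unspecified recognizer, person-type content can be located with OpenCV's built-in HOG pedestrian detector and then blurred; the detector choice and blur kernel are assumptions made for this sketch, not the patent's method.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def remove_person_type_content(image):
    """Detect person-type content and blur each detection box in place."""
    boxes, _weights = hog.detectMultiScale(image)
    for (x, y, w, h) in boxes:
        roi = image[y:y+h, x:x+w]
        image[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return image
```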
In some embodiments, before removing the specified content present in the captured image of at least one of the first image groups, the method may further include:
determining whether the number of target captured images in the at least one first image group is smaller than a first set threshold, where a target captured image is a captured image in which the specified content exists; if the number is smaller than the first set threshold, sending prompt content to the electronic device, where the prompt content asks whether the specified content present in the captured images of the at least one first image group should be removed; and upon receiving a confirmation instruction, removing the specified content present in the captured images of the at least one first image group.
In some embodiments, when the number of target captured images in a first image group is not greater than the first set threshold, the specified content may not actually disturb the user, so the user can be asked to confirm whether the removal should be performed for that group.
When the number of target captured images is larger than the first set threshold, many captured images in the first image group contain the specified content, which is very likely to disturb the user, so the server can remove the specified content directly from any first image group whose number of target captured images exceeds the first set threshold. Screening the first image groups with this removal condition reduces the number of groups from which the server removes specified content, improving the server's processing efficiency. The decision logic is sketched below.
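A few lines summarizing the threshold decision just described; `contains_specified_content` is a hypothetical detector, and the threshold value is whatever the system configures.

```python
def plan_removal(first_image_group, contains_specified_content, threshold):
    """Decide whether to remove the specified content directly or ask the user first."""
    targets = [img for img in first_image_group if contains_specified_content(img)]
    if len(targets) > threshold:
        return "remove"        # many affected frames: remove without asking
    return "prompt_user"       # few affected frames: ask the user to confirm
```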
Step S370: and splicing and synthesizing the shot images of the at least one second image group according to the shooting time sequence of the shot images to obtain a video file corresponding to the at least one moving object.
In some embodiments, after obtaining the plurality of second image groups, the server may further filter the captured images in the second image group corresponding to each moving object before performing stitching synthesis on the captured images in each of the plurality of second image groups.
As an embodiment, the server may determine whether a second image group includes images of the moving object captured by two adjacent cameras at the same moment; if so, it acquires the first target image captured by the first of the two adjacent cameras at that moment and the second target image captured by the second camera at the same moment, and then keeps whichever of the first and second target images has the better image quality parameter for the moving object.
In this embodiment, the areas shot by two adjacent cameras partially overlap, so when the moving object is in the overlapping portion, the two adjacent cameras may photograph it at the same time, and the captured images acquired by the server then include two images of the moving object taken at the same moment. A target image captured by the first camera (the first target image) and a target image captured by the second camera at the same moment (the second target image) can therefore be acquired.
Further, after acquiring the first target image and the second target image, the server may acquire the image quality parameter of the moving object in the first target image (the first image quality parameter) and in the second target image (the second image quality parameter). The image quality parameters may include clarity, brightness, sharpness, lens distortion, color, resolution, color gamut, purity, and the like, which are not limited here.
After obtaining the first image quality parameter and the second image quality parameter, the server may compare image quality effects corresponding to the first image quality parameter and the second image quality parameter, obtain an optimal image quality parameter from the first image quality parameter and the second image quality parameter according to a comparison result, and then screen an image corresponding to the optimal image quality parameter from the first target image and the second target image. For example, when the image quality parameter includes a sharpness, an image corresponding to the highest sharpness may be obtained from the first target image and the second target image.
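A common sharpness proxy is the variance of the Laplacian, which can stand in for the image quality parameter here; since the patent lists several possible parameters, using sharpness alone is an assumption of this sketch.

```python
import cv2

def sharpness(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()  # higher variance = sharper edges

def pick_better_frame(first_target, second_target):
    """Keep the simultaneously captured frame with the better quality parameter."""
    if sharpness(first_target) >= sharpness(second_target):
        return first_target
    return second_target
```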
After the server performs the image screening on each second image group, the server may perform stitching and synthesizing on the captured images of each second image group subjected to the image screening to obtain video files corresponding to a plurality of moving objects.
In some embodiments, after the video files corresponding to the plurality of mobile objects are acquired, the server may send the video files corresponding to the plurality of mobile objects to the electronic device. As one way, the server may send video files corresponding to a plurality of moving objects to the same electronic device. As another mode, the video file corresponding to each moving object is sent to the electronic device corresponding to each moving object.
In some embodiments, sending the video files corresponding to the plurality of moving objects to the electronic device may include: detecting whether captured images of other moving objects exist in the second image group corresponding to each moving object; and if captured images of other moving objects exist in the second image group corresponding to a first moving object, sending the video files corresponding to the plurality of moving objects to the electronic device together with prompt content indicating that the video file corresponding to the first moving object contains content of other moving objects.
It can be understood that, in the video file corresponding to a certain obtained moving object, there may be contents of other moving objects, and the other moving objects are also contents of interest to the user. Therefore, it is possible to determine whether the video file corresponding to the moving object includes the content of the other moving object by determining whether the second image group corresponding to the moving object includes the captured image of the other moving object.
Further, if captured images of other moving objects exist in the second image group corresponding to the first moving object, the server may send the prompt content to the electronic device together with the video files corresponding to the plurality of moving objects, so that the user is prompted that the video file corresponding to the first moving object contains content of other moving objects.
In some embodiments, the server may also mark other mobile objects in the video file, facilitating the user to view the other mobile objects in the video file. For example, in a scene for police to check evidence, if the content of the suspect B exists in the video file corresponding to the victim a, the content of the suspect B is marked, so that the police can conveniently and quickly find out the video evidence.
Of course, this manner of determining whether a moving object's video file contains content of other moving objects, and of sending the corresponding prompt content, may also be applied to the foregoing embodiments.
In some embodiments, the image processing method may also be applied to monitoring a scene of a moving object. The video file corresponding to the mobile object obtained by the server can be used as a monitoring video corresponding to the mobile object, and the server can send the monitoring video to the monitoring terminal corresponding to the mobile object, so that a user corresponding to the monitoring terminal can know the activity condition of the mobile object in time. For example, the mobile object may be an old person or a child, and the monitoring terminal may correspond to a guardian of the old person or the child, so that the guardian can timely know the condition of the old person or the child at home, and the occurrence of an accident is avoided.
Further, after the server obtains the monitoring video of the mobile object, the server can automatically analyze the monitoring video to judge whether the mobile object in the monitoring video is abnormal, and when the judgment result represents that the mobile object is abnormal, alarm information can be sent to the monitoring terminal corresponding to the mobile object, so that a user corresponding to the monitoring terminal can timely perform corresponding processing. The abnormal condition may include falling, lying down, crying, onset of disease, etc., which is not limited herein. In addition, when the server sends the alarm information to the monitoring terminal, the server can also send the monitoring video or the video clip corresponding to the abnormal condition to the monitoring terminal, so that a user corresponding to the monitoring terminal can know the real condition of the mobile object in time.
In some embodiments, after receiving the monitoring video sent by the server, the monitoring terminal may send target instruction information to the server based on the monitoring video; after receiving the target instruction information, the server performs the corresponding operation in response, for example dialing an alarm call or an emergency call.
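A minimal sketch of the analyze-and-alarm flow is given below. The activity classifier (classify_clip) and the push channel (send) are placeholders assumed for illustration; the embodiments leave both unspecified.

    ABNORMAL_EVENTS = {"fall", "lying_down", "crying", "illness_onset"}

    def analyze_and_alert(video_path, terminal, classify_clip, send):
        """Scan a monitoring video for abnormal events and alarm the terminal.
        classify_clip(video_path) is assumed to return [(event_name, timestamp), ...]."""
        events = classify_clip(video_path)
        abnormal = [(e, t) for e, t in events if e in ABNORMAL_EVENTS]
        for event, timestamp in abnormal:
            send(terminal, {
                "type": "alarm",
                "event": event,
                "timestamp": timestamp,
                "clip": video_path,   # or a segment cut out around `timestamp`
            })
        return bool(abnormal)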
In the image processing method provided by the embodiments of the application, captured images satisfying a screening condition are screened from the captured images of a plurality of cameras distributed at different positions; the screened images are grouped by moving object to obtain a plurality of first image groups; the user-selected specified type of content is removed from the captured images of the plurality of first image groups to obtain a plurality of second image groups; and the captured images of each second image group are spliced and synthesized into video files corresponding to the different moving objects. Removing the specified content eliminates interference information from the video files and improves the monitoring effect, the content the user wants removed no longer disturbs the user, the user does not need to check the captured images of each shooting area separately, and user experience is improved.
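For illustration, the splice-and-synthesize step can be sketched with OpenCV as follows. The frame rate and codec are illustrative choices not fixed by the method, and the image paths are assumed to be pre-sorted by shooting time.

    import cv2

    def stitch_group_to_video(image_paths_sorted_by_time, out_path, fps=10):
        """Splice the (already content-removed) captured images of one second
        image group into a video file, in shooting-time order."""
        frames = [cv2.imread(p) for p in image_paths_sorted_by_time]
        frames = [f for f in frames if f is not None]
        if not frames:
            return None
        h, w = frames[0].shape[:2]
        writer = cv2.VideoWriter(out_path,
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        for f in frames:
            if f.shape[:2] != (h, w):          # cameras may differ in resolution
                f = cv2.resize(f, (w, h))
            writer.write(f)
        writer.release()
        return out_path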
Referring to fig. 7, a block diagram of an image processing apparatus 400 according to an embodiment of the present application is shown. The image processing apparatus 400 is applied to the above server, the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the plurality of cameras adjoin or partially overlap. The image processing apparatus 400 includes: an image acquisition module 410, an image grouping module 420, a content removal module 430, and a video synthesis module 440. The image acquisition module 410 is configured to acquire images captured by the plurality of cameras; the image grouping module 420 is configured to group the captured images of the plurality of cameras by moving object, according to the moving objects existing in the captured images, to obtain a plurality of first image groups, where each first image group is a set of captured images containing the same moving object and each first image group corresponds to a different moving object; the content removal module 430 is configured to, when a removal instruction for removing specified content from the captured images of at least one first image group is received, remove the specified content existing in those captured images to obtain at least one second image group; and the video synthesis module 440 is configured to splice and synthesize the captured images of the at least one second image group in the order of their shooting times to obtain a video file corresponding to at least one moving object.
In some embodiments, the image processing apparatus 400 may further include an image sending module and a first instruction receiving module. The image sending module is configured to send at least one captured image of each of the plurality of first image groups to the electronic device before the specified content is removed. The first instruction receiving module is configured to receive the removal instruction sent by the electronic device; the electronic device sends the removal instruction when, after displaying a selection interface for selecting content to remove based on the at least one captured image of each first image group, it detects a selection of the content of a target area in a captured image corresponding to at least one first image group in the selection interface.
In this embodiment, referring to fig. 8, the content removal module 430 may include a content identification unit 431 and a removal executing unit 432. The content identification unit 431 is configured to identify the content of the target area in response to the removal instruction, to obtain feature information of the content of the target area; the removal executing unit 432 is configured to remove, based on the feature information, the content matching the feature information in each captured image of the at least one first image group, to obtain at least one second image group.
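A minimal sketch of the feature-based removal is shown below. The embodiments do not specify the feature extraction, so plain template matching on the selected target-area content stands in for it here, with inpainting as the removal step; both choices are assumptions of this example.

    import cv2
    import numpy as np

    def remove_matching_content(images, template, threshold=0.8):
        """For each captured image, find regions matching the target-area content
        and inpaint them, yielding the images of the second image group."""
        tgray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
        th, tw = tgray.shape
        cleaned = []
        for img in images:
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            scores = cv2.matchTemplate(gray, tgray, cv2.TM_CCOEFF_NORMED)
            mask = np.zeros(gray.shape, dtype=np.uint8)
            for y, x in zip(*np.where(scores >= threshold)):
                mask[y:y + th, x:x + tw] = 255   # mark every matching region
            cleaned.append(cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA))
        return cleaned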
In some embodiments, the specified content includes a specified type of content. The image processing apparatus 400 may further include a content identification module, a content sending module, and a second instruction receiving module. The content identification module is configured to identify the plurality of types of content existing in the captured images of each of the plurality of first image groups before the specified content is removed. The content sending module is configured to send the plurality of types of content corresponding to each first image group to the electronic device. The second instruction receiving module is configured to receive the removal instruction sent by the electronic device; the electronic device sends the removal instruction when, after displaying a selection interface for selecting content to remove based on the plurality of types of content corresponding to each first image group, it detects a selection operation on a specified type of content corresponding to at least one first image group in the selection interface.
In this embodiment, the content removal module 430 may be specifically configured to remove, in response to the removal instruction, the specified type of content existing in each captured image of the at least one first image group, to obtain at least one second image group.
In some embodiments, referring to fig. 9, the content removal module 430 may include a quantity judging unit 433, a content prompting unit 434, and a removal executing unit 432. The quantity judging unit 433 is configured to judge whether the number of target captured images in the at least one first image group is smaller than a first set threshold, where a target captured image is a captured image in which the specified content exists; the content prompting unit 434 is configured to send prompt content to the electronic device if the number is smaller than the first set threshold, where the prompt content asks whether to remove the specified content existing in the captured images of the at least one first image group; and the removal executing unit 432 is configured to remove the specified content existing in the captured images of the at least one first image group upon receiving a determination instruction.
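The number-judging flow can be sketched as follows, reusing the illustrative ImageGroup from the earlier example; contains_specified, prompt_device, and remove stand in for the content detector, the device round-trip, and the removal step, and are assumptions of this sketch.

    def remove_with_confirmation(group, contains_specified, first_threshold,
                                 prompt_device, remove):
        """Prompt for confirmation only when few images contain the specified content."""
        targets = [img for img in group.images if contains_specified(img)]
        if len(targets) < first_threshold:
            confirmed = prompt_device(
                "%d image(s) contain the specified content. Remove?" % len(targets))
            if not confirmed:
                return group            # keep the first image group unchanged
        return remove(group, targets)   # yields the second image group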
In some embodiments, referring to fig. 10, the image grouping module 420 may include an image screening unit 421 and a grouping executing unit 422. The image screening unit 421 is configured to acquire, from the captured images of the plurality of cameras, captured images satisfying a screening condition, the screening condition including images captured by cameras of a specified area or images captured in a specified time period; the grouping executing unit 422 is configured to group the captured images satisfying the screening condition by moving object, according to the moving objects existing in those images, to obtain the plurality of first image groups.
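A minimal sketch of the screening condition follows, assuming each image record is a CapturedImage as in the earlier example (the field names are illustrative):

    def screen_images(images, area_camera_ids=None, start=None, end=None):
        """Keep images shot by cameras of a specified area and/or within a
        specified time period; None means the corresponding filter is unused."""
        kept = []
        for img in images:
            if area_camera_ids is not None and img.camera_id not in area_camera_ids:
                continue
            if start is not None and img.timestamp < start:
                continue
            if end is not None and img.timestamp > end:
                continue
            kept.append(img)
        return kept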
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical, or of another form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be implemented in the form of hardware or in the form of a software functional module.
To sum up, the solution provided by the application is applied to a server, the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the plurality of cameras adjoin or partially overlap. Captured images of the plurality of cameras are acquired and grouped by moving object, according to the moving objects existing in the images, to obtain a plurality of first image groups, each being a set of captured images containing the same moving object. When a removal instruction for removing specified content from the captured images of at least one first image group is received, the specified content existing in those captured images is removed to obtain at least one second image group, and the captured images of the at least one second image group are spliced and synthesized in shooting-time order to obtain a video file corresponding to at least one moving object. The captured images of a moving object across multiple shooting areas are thus spliced into a complete monitoring video of that object, which improves the monitoring effect; removing the specified content from the video file avoids its interference with the user; the user does not need to check the captured images of each shooting area separately; and user experience is improved.
Referring to fig. 11, a block diagram of a server according to an embodiment of the present application is shown. The server 100 may be a cloud server or a conventional server. The server 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more applications, where the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more applications being configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the server 100 using various interfaces and lines, and performs the various functions of the server 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the server 100 in use (such as phone books, audio and video data, and chat log data), and the like.
Referring to fig. 12, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 800 stores program code that can be called by a processor to execute the methods described in the foregoing method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 810 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, without such modifications or replacements causing the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application.

Claims (9)

1. An image processing method, applied to a server, wherein the server is in communication connection with a plurality of cameras, the plurality of cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the plurality of cameras adjoin or partially overlap, the method comprising:
acquiring images captured by the plurality of cameras;
grouping the captured images of the plurality of cameras by moving object, according to the moving objects existing in the captured images, to obtain a plurality of first image groups, wherein each first image group is a set of captured images containing the same moving object, and each first image group corresponds to a different moving object;
when a removal instruction for removing specified content from the captured images of at least one first image group is received, judging whether the number of target captured images in the at least one first image group is smaller than a first set threshold, wherein a target captured image is a captured image in which the specified content exists;
if the number is smaller than the first set threshold, sending prompt content to an electronic device, wherein the prompt content prompts whether to remove the specified content existing in the captured images of the at least one first image group;
when a determination instruction is received, removing the specified content existing in the captured images of the at least one first image group to obtain at least one second image group; and
splicing and synthesizing the captured images of the at least one second image group in the order of their shooting times to obtain a video file corresponding to at least one moving object.
2. The method according to claim 1, wherein before the specified content existing in the captured images of the at least one first image group is removed when the removal instruction for removing the specified content from the captured images of the at least one first image group is received, the method further comprises:
sending at least one captured image of each of the plurality of first image groups to the electronic device; and
receiving the removal instruction sent by the electronic device, wherein the electronic device sends the removal instruction when, after displaying a selection interface for selecting content to remove based on the at least one captured image of each of the plurality of first image groups, a selection of the content of a target area in a captured image corresponding to at least one first image group is detected in the selection interface.
3. The method according to claim 2, wherein removing the specified content existing in the captured images of the at least one first image group comprises:
identifying, in response to the removal instruction, the content of the target area to obtain feature information of the content of the target area; and
removing, based on the feature information, the content matching the feature information in each captured image of the at least one first image group to obtain the at least one second image group.
4. The method according to claim 1, wherein the specified content comprises a specified type of content, and before the specified content existing in the captured images of the at least one first image group is removed when the removal instruction for removing the specified content from the captured images of the at least one first image group is received, the method further comprises:
identifying a plurality of types of content existing in the captured images of each of the plurality of first image groups;
sending the plurality of types of content corresponding to each of the plurality of first image groups to the electronic device; and
receiving the removal instruction sent by the electronic device, wherein the electronic device sends the removal instruction when, after displaying a selection interface for selecting content to remove based on the plurality of types of content corresponding to each of the plurality of first image groups, a selection operation on a specified type of content corresponding to at least one first image group is detected in the selection interface.
5. The method according to claim 4, wherein removing the specified content existing in the captured images of the at least one first image group comprises:
removing, in response to the removal instruction, the specified type of content existing in each captured image of the at least one first image group to obtain the at least one second image group.
6. The method according to any one of claims 1 to 5, wherein grouping the captured images of the plurality of cameras by moving object, according to the moving objects existing in the captured images, to obtain the plurality of first image groups comprises:
acquiring, from the captured images of the plurality of cameras, captured images satisfying a screening condition, the screening condition including images captured by cameras of a specified area or images captured in a specified time period; and
grouping the captured images satisfying the screening condition by moving object, according to the moving objects existing in those images, to obtain the plurality of first image groups.
7. An image processing apparatus, applied to a server, wherein the server is in communication connection with a plurality of cameras, the plurality of cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the plurality of cameras adjoin or partially overlap, the apparatus comprising an image acquisition module, an image grouping module, a content removal module, and a video synthesis module, wherein:
the image acquisition module is configured to acquire images captured by the plurality of cameras;
the image grouping module is configured to group the captured images of the plurality of cameras by moving object, according to the moving objects existing in the captured images, to obtain a plurality of first image groups, wherein each first image group is a set of captured images containing the same moving object, and each first image group corresponds to a different moving object;
the content removal module comprises a quantity judging unit, a content prompting unit, and a removal executing unit, wherein the quantity judging unit is configured to, when a removal instruction for removing specified content from the captured images of at least one first image group is received, judge whether the number of target captured images in the at least one first image group is smaller than a first set threshold, a target captured image being a captured image in which the specified content exists; the content prompting unit is configured to send prompt content to an electronic device if the number is smaller than the first set threshold, the prompt content prompting whether to remove the specified content existing in the captured images of the at least one first image group; and the removal executing unit is configured to remove, when a determination instruction is received, the specified content existing in the captured images of the at least one first image group to obtain at least one second image group; and
the video synthesis module is configured to splice and synthesize the captured images of the at least one second image group in the order of their shooting times to obtain a video file corresponding to at least one moving object.
8. A server, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method of any one of claims 1 to 6.
9. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 6.
CN201910579270.5A 2019-06-28 2019-06-28 Image processing method, image processing apparatus, server, and storage medium Active CN110267009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910579270.5A CN110267009B (en) 2019-06-28 2019-06-28 Image processing method, image processing apparatus, server, and storage medium

Publications (2)

Publication Number Publication Date
CN110267009A CN110267009A (en) 2019-09-20
CN110267009B true CN110267009B (en) 2021-03-12

Family

ID=67923273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910579270.5A Active CN110267009B (en) 2019-06-28 2019-06-28 Image processing method, image processing apparatus, server, and storage medium

Country Status (1)

Country Link
CN (1) CN110267009B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320782B (en) * 2019-12-18 2024-03-26 荣耀终端有限公司 Control method, electronic equipment, computer readable storage medium and chip
CN114697723B (en) * 2020-12-28 2024-01-16 北京小米移动软件有限公司 Video generation method, device and medium
CN113420170B (en) * 2021-07-15 2023-04-14 宜宾中星技术智能系统有限公司 Multithreading storage method, device, equipment and medium for big data image

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1658670A (en) * 2004-02-20 2005-08-24 上海银晨智能识别科技有限公司 Intelligent tracking monitoring system with multi-camera
CN105513030A (en) * 2014-09-24 2016-04-20 联想(北京)有限公司 Information processing method and apparatus, and electronic equipment
CN106210542A (en) * 2016-08-16 2016-12-07 深圳市金立通信设备有限公司 The method of a kind of photo synthesis and terminal
CN106600525A (en) * 2016-12-09 2017-04-26 宇龙计算机通信科技(深圳)有限公司 Picture fuzzy processing method and system
CN107358146A (en) * 2017-05-22 2017-11-17 深圳云天励飞技术有限公司 Method for processing video frequency, device and storage medium
CN108540754A (en) * 2017-03-01 2018-09-14 中国电信股份有限公司 Methods, devices and systems for more video-splicings in video monitoring
CN109726716A (en) * 2018-12-29 2019-05-07 深圳市趣创科技有限公司 A kind of image processing method and system
CN109801211A (en) * 2018-12-19 2019-05-24 中德(珠海)人工智能研究院有限公司 A kind of object removing method based on panorama camera

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002243656A (en) * 2001-02-14 2002-08-28 Nec Corp Method and equipment for visual inspection
US9525802B2 (en) * 2013-07-24 2016-12-20 Georgetown University Enhancing the legibility of images using monochromatic light sources
US20170193644A1 (en) * 2015-12-30 2017-07-06 Ebay Inc Background removal
CN108737875B (en) * 2017-04-13 2021-08-17 北京小度互娱科技有限公司 Image processing method and device
CN109308516A (en) * 2017-07-26 2019-02-05 华为技术有限公司 A kind of method and apparatus of image procossing
CN109886130B (en) * 2019-01-24 2021-05-28 上海媒智科技有限公司 Target object determination method and device, storage medium and processor

Also Published As

Publication number Publication date
CN110267009A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN110267008B (en) Image processing method, image processing apparatus, server, and storage medium
CN110267009B (en) Image processing method, image processing apparatus, server, and storage medium
JP7266672B2 (en) Image processing method, image processing apparatus, and device
TWI435279B (en) Monitoring system, image capturing apparatus, analysis apparatus, and monitoring method
WO2020057355A1 (en) Three-dimensional modeling method and device
CN110191324B (en) Image processing method, image processing apparatus, server, and storage medium
US9282238B2 (en) Camera system for determining pose quality and providing feedback to a user
CN111163259A (en) Image capturing method, monitoring camera and monitoring system
KR101514061B1 (en) Wireless camera device for managing old and weak people and the management system thereby
CN109376601B (en) Object tracking method based on high-speed ball, monitoring server and video monitoring system
CN107395957B (en) Photographing method and device, storage medium and electronic equipment
CN107360366B (en) Photographing method and device, storage medium and electronic equipment
CN107465856B (en) Image pickup method and device and terminal equipment
CN111836052B (en) Image compression method, image compression device, electronic equipment and storage medium
US20150116471A1 (en) Method, apparatus and storage medium for passerby detection
CN106791703B (en) The method and system of scene is monitored based on panoramic view
CN110267011B (en) Image processing method, image processing apparatus, server, and storage medium
CN110266953B (en) Image processing method, image processing apparatus, server, and storage medium
JP2014042160A (en) Display terminal, setting method of target area of moving body detection and program
CN115086567A (en) Time-delay shooting method and device
CN113411498A (en) Image shooting method, mobile terminal and storage medium
CN107977437B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN106888353A (en) A kind of image-pickup method and equipment
US20130308829A1 (en) Still image extraction apparatus
CN109040654B (en) Method and device for identifying external shooting equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant