CN113055586A - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number
CN113055586A
Authority
CN
China
Prior art keywords
video
identification information
person
camera
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911380674.8A
Other languages
Chinese (zh)
Inventor
聂兰龙
Current Assignee
Qingdao Qianyan Feifeng Information Technology Co ltd
Original Assignee
Qingdao Qianyan Feifeng Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Qianyan Feifeng Information Technology Co ltd filed Critical Qingdao Qianyan Feifeng Information Technology Co ltd
Priority to CN201911380674.8A priority Critical patent/CN113055586A/en
Priority to PCT/CN2020/134645 priority patent/WO2021129382A1/en
Publication of CN113055586A publication Critical patent/CN113055586A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a video processing method and a video processing device. The method includes: receiving a trigger signal, where the trigger signal triggers a camera device to shoot, the camera device includes a driving device and a camera, and the driving device drives the camera in the camera device to move; the camera shoots video while moving; acquiring the video shot by the camera under the trigger of the trigger signal; extracting the identification information corresponding to the persons in the video, and storing the correspondence between the identification information and the video; acquiring the identification information of a video to be retrieved, and searching the stored videos for one or more videos corresponding to that identification information; and presenting the one or more videos to the user corresponding to the identification information. The invention solves the technical problem of low reliability of video processing in the related art.

Description

Video processing method and device
Technical Field
The invention relates to the technical field of video processing, in particular to a video processing method and device.
Background
At present, video is mainly captured with cameras, mobile phones, camcorders, unmanned aerial vehicles, and similar equipment; methods and means for providing high-quality video services are lacking.
Besides capture equipment with good imaging quality, high-quality video requires auxiliary shooting means. For example, video captured by a traditional fixed camera has a single background and fixed blind spots, lacks dynamic effect, and cannot achieve the desired presentation. When shooting a movie, for instance, photographers usually set up facilities such as a rail car or a camera boom to achieve the desired effect.
If facilities such as rail cars and camera booms used in professional movie shooting were transplanted to scenic spots to provide video capture services for tourists, photographers would need to be employed to provide manual service on the one hand, and on the other hand a large number of video clips would be generated that must be selected by manual comparison before the resulting videos are delivered to the users who were filmed. Manual comparison is inefficient at selecting videos, and if the number of generated videos exceeds the manual selection capacity, the quality of service suffers.
In view of the above problem of low reliability of video processing in the related art, no effective solution has been proposed so far.
Disclosure of Invention
The embodiments of the invention provide a video processing method and a video processing device, which at least solve the technical problem of low reliability of the video processing mode in the related art.
According to an aspect of an embodiment of the present invention, there is provided a video processing method, including: receiving a trigger signal, wherein the trigger signal is used for triggering a camera device to shoot, the camera device comprises a driving device and a camera, and the driving device is used for driving the camera in the camera device to move; the camera is used for shooting videos in the moving process; acquiring a video shot by the camera under the trigger of the trigger signal; and extracting the identification information corresponding to the people in the video from the video, and storing the corresponding relation between the identification information and the video.
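The core flow above — capture on trigger, extract identities, store the identity-to-video correspondence — can be sketched minimally as follows. This is an in-memory illustration only; the class, the identity-string formats, and all names are assumptions for the example, not part of the disclosure:

```python
from collections import defaultdict

class VideoStore:
    """Minimal sketch of the identity-to-video correspondence store
    described above (all names are illustrative, not from the patent)."""

    def __init__(self):
        # identity -> list of video ids containing that person
        self._by_identity = defaultdict(list)

    def save(self, video_id, identities):
        """Record that each extracted identity appears in video_id."""
        for identity in identities:
            self._by_identity[identity].append(video_id)

    def videos_for(self, identity):
        """Return all stored videos corresponding to one identity."""
        return list(self._by_identity[identity])

store = VideoStore()
store.save("clip_001.mp4", ["rfid:1234", "face:ab12"])
store.save("clip_002.mp4", ["rfid:1234"])
print(store.videos_for("rfid:1234"))  # both clips
```

A persistent implementation would replace the dictionary with a database table keyed by identity, but the correspondence being stored is the same.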
Optionally, before receiving the trigger signal, the video processing method further includes: sensing, through a sensor arranged on the camera device, that a person appears in the shooting range of the camera device; and issuing the trigger signal in response to the sensing.
Optionally, the sensor is at least one of: the device comprises an infrared induction unit, a radio frequency induction unit and a radar detection unit.
Optionally, before receiving the trigger signal, the video processing method further includes: receiving an operation of a switch in the image pickup device by a user; issuing the trigger signal in response to the operation.
Optionally, the switch is at least one of: the device comprises a key switch unit, a touch switch unit and a photoelectric switch unit.
Optionally, before receiving the trigger signal, the video processing method further includes: receiving the operation of a user on the software interface; and responding to the operation, and sending the trigger signal to the camera device through a network.
Optionally, before receiving an operation of a user on the software interface, the video processing method further includes: acquiring identification information of the camera device by scanning a graphical code arranged on the camera device; and displaying the operation which can be carried out on the camera device on the software interface according to the identification information.
Optionally, after extracting the identification information corresponding to the person in the video from the video and storing the corresponding relationship between the identification information and the video, the method further includes: acquiring identity identification information of a video to be extracted, and searching one or more videos corresponding to the identity identification information of the video to be extracted in stored videos; and displaying the one or more videos to a user corresponding to the identification information of the video to be extracted.
Optionally, before receiving an operation of a user on the software interface, the video processing method further includes: acquiring the geographical position information of the handheld device when the person's handheld device displays the software interface; and displaying, on the software interface according to the geographical position information, the camera devices within a preset range of the handheld device that the person can control and operate.
Optionally, extracting, from the video, the identification information corresponding to a person in the video includes: extracting the sensed radio frequency identification information from a radio frequency signal and using it as the identification information identifying the person; or extracting the identification information identifying the person from a network trigger signal. Acquiring the identification information of the video to be retrieved includes: acquiring the sensed radio frequency identification information extracted from the radio frequency signal and determining it as the identification information of the person; or extracting the identification information identifying the person from the network trigger signal.
Optionally, extracting, from the video, the identification information corresponding to a person in the video includes: identifying, from the person, an attachment on the person and/or a biometric feature of the person; and using the feature information of the attachment and/or the feature information of the biometric feature as the identification information identifying the person. Acquiring the identification information of the video to be retrieved includes: acquiring the feature information of the person's attachment and/or the feature information of the person's biometric feature, and determining it as the identification information of the person. The attachment includes at least one of: apparel, accessories, and hand-held articles; the biometric feature includes at least one of: facial features and posture features; the attachment uniquely identifies the person within a predetermined area.
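One common way to realize the biometric/attachment matching described above is to compare extracted feature vectors against enrolled identities. The sketch below assumes feature extraction has already produced fixed-length vectors; the cosine-similarity measure, the threshold value, and all names are assumptions for illustration, not specified by the disclosure:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_identity(query_vec, known, threshold=0.9):
    """Return the enrolled identity whose feature vector is most similar
    to query_vec, or None if no similarity clears the threshold."""
    best_id, best_sim = None, threshold
    for identity, vec in known.items():
        sim = cosine_similarity(query_vec, vec)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id

# Toy 3-dimensional "feature vectors"; real face or posture embeddings
# would be produced by a recognition model and be much longer.
known = {"visitor_a": [0.9, 0.1, 0.4], "visitor_b": [0.1, 0.95, 0.2]}
print(match_identity([0.88, 0.12, 0.41], known))  # visitor_a
```

Returning None when nothing clears the threshold matters in this application: an unenrolled passer-by in the frame should not be matched to any stored identity.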
Optionally, when the number of people in the video is multiple, extracting, from the video, the identification information corresponding to the people in the video includes: identifying attachments and/or biometrics on each person from the plurality of persons; and determining the characteristic information of the attachment on each person and/or the characteristic information corresponding to the biological characteristics as the identification information of each person in the plurality of persons, and storing the corresponding relation between the identification information of each person in the plurality of persons and the video.
Optionally, the storing the correspondence between the identification information of each of the plurality of persons and the video includes: determining a time node at which the identification information of each of the plurality of people is identified in the video; taking the time node as a time tag of the identification information of each person in the plurality of persons; and storing the identification information of each of the plurality of persons and the corresponding relation between the time labels added to the identification information of each of the plurality of persons and the video.
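The per-person time tagging just described can be sketched as an index from identity to (video, time node) pairs, so that each person's appearances inside a longer video can be located later. All names and the in-memory structure are illustrative assumptions:

```python
def save_time_tags(video_id, detections, index):
    """Store the correspondence between each person's identification
    information, the time node at which it was recognized, and the video.

    detections: list of (identity, time_node_seconds) pairs found in the video
    index: dict mapping identity -> list of (video_id, time_tag) pairs
    """
    for identity, time_node in detections:
        index.setdefault(identity, []).append((video_id, time_node))

index = {}
save_time_tags(
    "clip_003.mp4",
    [("visitor_a", 12.5), ("visitor_b", 40.0), ("visitor_a", 75.0)],
    index,
)
print(index["visitor_a"])  # both appearances of visitor_a, with time tags
```

With such time tags stored, the service could later cut or seek to exactly the segments in which the requesting user appears, instead of returning whole clips.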
Optionally, the moving track of the camera includes at least one of: the target object tracking system comprises a reciprocating movement track between a preset starting point and a preset end point, a circulating movement track of a preset path, a movement track designed based on a preset programming program and a tracking movement track following the target object.
Optionally, the moving mode of the camera is at least one of the following: orbital movement, rotational movement.
Optionally, the driving manner of the driving device is at least one of the following: mechanical drive, electromagnetic drive and pressure drive.
According to another aspect of the embodiments of the present invention, there is also provided a video processing apparatus including: the device comprises a first receiving unit, a second receiving unit and a control unit, wherein the first receiving unit is used for receiving a trigger signal, the trigger signal is used for triggering a camera device to shoot, the camera device comprises a driving device and a camera, and the driving device is used for driving the camera in the camera device to move; the camera is used for shooting videos in the moving process; the first acquisition unit is used for acquiring a video shot by the camera under the trigger of the trigger signal; and the extraction unit is used for extracting the identification information corresponding to the people in the video from the video and storing the corresponding relation between the identification information and the video.
Optionally, the video processing apparatus further includes: a sensing unit, configured to sense, through a sensor arranged on the camera device before the trigger signal is received, that a person is in the shooting range of the camera device; and a first response unit, configured to issue the trigger signal in response to the sensing.
Optionally, the sensor is at least one of: the device comprises an infrared induction unit, a radio frequency induction unit and a radar detection unit.
Optionally, the video processing apparatus further includes: a second receiving unit configured to receive an operation of a switch in the image pickup apparatus by a user before receiving the trigger signal; a second response unit for issuing the trigger signal in response to the operation.
Optionally, the switch is at least one of: the device comprises a key switch unit, a touch switch unit and a photoelectric switch unit.
Optionally, the video processing apparatus further includes: the third receiving unit is used for receiving the operation of a user on the software interface before receiving the trigger signal; and the third response unit is used for responding to the operation and sending the trigger signal to the image pickup device through a network.
Optionally, the apparatus further comprises: the third acquisition unit is used for acquiring the identification information of the camera device by scanning the graphical code arranged on the camera device before receiving the operation of a user on a software interface; and the display unit is used for displaying the operation which can be carried out on the camera device on the software interface according to the identification information.
Optionally, the video processing apparatus further includes: the fourth acquisition unit is used for acquiring the geographical position information of the handheld equipment under the condition that the handheld equipment of the person is displayed on the software interface before the operation of the user on the software interface is received; and the display unit is used for displaying the image pickup device which can be controlled by the person within a preset range from the handheld equipment on the software interface according to the geographical position information and performing operation on the image pickup device.
Optionally, the apparatus further comprises: the second acquisition unit is used for acquiring the identification information of the video to be extracted after the identification information corresponding to the person in the video is extracted from the video and the corresponding relation between the identification information and the video is stored, and searching one or more videos corresponding to the identification information of the video to be extracted from the stored videos; and the display unit is used for displaying the one or more videos to a user corresponding to the identification information of the video to be extracted.
Optionally, the extracting unit is configured to identify, from the person, an attachment on the person and/or a biometric feature of the person, and to use the feature information of the attachment and/or the feature information of the biometric feature as the identification information identifying the person; the second acquiring unit is configured to acquire the feature information of the person's attachment and/or the feature information of the person's biometric feature, and to determine it as the identification information of the person; wherein the attachment includes at least one of: apparel, accessories, and hand-held articles; the biometric feature includes at least one of: facial features and posture features; and the attachment uniquely identifies the person within a predetermined area.
Optionally, the extracting unit is further configured to extract the sensed radio frequency identification information from a radio frequency signal and use it as the identification information identifying the person, or to extract the identification information identifying the person from a network trigger signal; the second acquiring unit is further configured to acquire the sensed radio frequency identification information extracted from the radio frequency signal and determine it as the identification information of the person, or to extract the identification information identifying the person from the network trigger signal.
Optionally, in a case where the number of people in the video is plural, the extracting unit includes: a first identification module for identifying an attachment and/or biometric characteristic on each of the plurality of persons; the first storage module is used for determining the characteristic information of the attachment on each person and/or the characteristic information corresponding to the biological characteristics as the identification information of each person in the plurality of persons and storing the corresponding relation between the identification information of each person in the plurality of persons and the video.
Optionally, the first saving module comprises: a first determining submodule, configured to determine a time node at which identification information of each of the plurality of persons is identified in the video; the second determining submodule is used for taking the time node as a time label of the identity identification information of each person in the plurality of persons; and the storage submodule is used for storing the identification information of each person in the plurality of persons and the corresponding relation between the time labels added to the identification information of each person in the plurality of persons and the video.
Optionally, the moving track of the camera includes at least one of: the target object tracking system comprises a reciprocating movement track between a preset starting point and a preset end point, a circulating movement track of a preset path, a movement track designed based on a preset programming program and a tracking movement track following the target object.
Optionally, the moving mode of the camera is at least one of the following: orbital movement, rotational movement.
Optionally, the driving manner of the driving device is at least one of the following: mechanical drive, electromagnetic drive and pressure drive.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein the program executes the video processing method according to any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes to perform the video processing method described in any one of the above.
In the embodiment of the invention, a trigger signal is received, where the trigger signal triggers a camera device to shoot; the camera device includes a driving device and a camera, the driving device drives the camera to move, and the camera shoots video while moving. The video shot by the camera is acquired under the trigger of the trigger signal, the identification information corresponding to the persons in the video is extracted, and the correspondence between the identification information and the video is stored. In this way, videos containing a user, together with the identification information of the persons recognized in them, are stored in advance, so that the user's videos can later be retrieved from the pre-stored video library according to the user's identification information.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of a video processing method according to a first embodiment of the present application;
FIG. 2 is a schematic view of an application scenario of a camera according to a first embodiment of the present application;
FIG. 3 is a schematic diagram of a camera according to a first embodiment of the present application;
FIG. 4 is a diagram illustrating a first application scenario of a camera according to a second embodiment of the present application;
FIG. 5 is a schematic diagram of a second application scenario of the photographing apparatus according to the second embodiment of the present application;
FIG. 6 is a schematic diagram of a camera according to the second embodiment of the present application;
fig. 7 is a first schematic view of an application scenario of a shooting device according to a third embodiment of the present application;
fig. 8 is a second schematic view of an application scenario of a camera according to a third embodiment of the present application;
fig. 9 is a first schematic view of an application scenario of a camera according to a fourth embodiment of the present application;
fig. 10 is a second schematic view of an application scenario of a camera according to a fourth embodiment of the present application;
fig. 11 is a schematic view of a photographing apparatus according to a fourth embodiment of the present application;
fig. 12 is a schematic view of an application scenario of a camera according to a fifth embodiment of the present application;
fig. 13 is a schematic view of an application scenario of a camera according to a sixth embodiment of the present application;
FIG. 14 is a schematic diagram of a camera according to the sixth embodiment of the present application;
fig. 15 is a schematic view of an application scenario of a camera according to a seventh embodiment of the present application;
fig. 16 is a schematic view of a photographing apparatus according to a seventh embodiment of the present application;
fig. 17 is a schematic view of an application scenario of a camera according to an eighth embodiment of the present application;
fig. 18 is a schematic view of a photographing apparatus according to an eighth embodiment of the present application;
FIG. 19 is a diagram of an application scenario of a camera according to the ninth embodiment of the present application;
FIG. 20 is a schematic diagram of a camera according to the ninth embodiment of the present application;
fig. 21 is a schematic view of an application scenario of a camera according to a tenth embodiment of the present application;
fig. 22 is a schematic view of a photographing apparatus according to a tenth embodiment of the present application;
fig. 23 is a schematic view of an application scene of a camera according to an eleventh embodiment of the present application;
fig. 24 is a schematic view of a photographing apparatus according to an eleventh embodiment of the present application;
fig. 25 is a schematic view of an application scenario of a camera according to a twelfth embodiment of the present application;
fig. 26 is a schematic view of a photographing apparatus according to a twelfth embodiment of the present application;
fig. 27 is a schematic view of an application scenario of a camera according to the thirteenth embodiment of the present application;
fig. 28 is a schematic view of a photographing apparatus according to a thirteenth embodiment of the present application;
fig. 29 is a schematic view of an application scenario of a camera according to a fourteenth embodiment of the present application;
FIG. 30 is a schematic diagram of a camera according to a fourteenth embodiment of the present application;
fig. 31 is a schematic view of an application scenario of a camera according to the fifteenth embodiment of the present application;
fig. 32 is a schematic view of a camera according to fifteenth embodiment of the present application;
fig. 33 is a schematic view of an application scenario of a camera according to a sixteenth embodiment of the present application;
fig. 34 is a schematic view of a camera according to a sixteenth embodiment of the present application;
fig. 35 is a first schematic view of an application scenario of a camera according to a seventeenth embodiment of the present application;
fig. 36 is a second schematic view of an application scenario of a camera according to a seventeenth embodiment of the present application;
fig. 37 is a schematic view of a camera according to the seventeenth embodiment of the present application; and
fig. 38 is a schematic device diagram of a video processing apparatus according to an eighteenth embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
In accordance with an embodiment of the present invention, there is provided a method embodiment of a video processing method, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present invention, as shown in fig. 1, the video processing method includes the steps of:
step S102, receiving a trigger signal, wherein the trigger signal is used for triggering a camera device to shoot, the camera device comprises a driving device and a camera, and the driving device is used for driving the camera in the camera device to move; the camera is used for shooting videos in the moving process.
Optionally, the driving mode of the driving device is at least one of the following: mechanical drive, electromagnetic drive and pressure drive. Wherein, the mechanical drive can be realized by a roller, a pull rope, a conveyor belt, a screw rod and the like; the electromagnetic drive can be realized by a linear motor, magnetic suspension and the like; the pressure driving may be implemented by fluid pressure such as hydraulic pressure or air pressure, for example, hydraulic energy, wind energy, a hydraulic pump function, an air pump function, and the like.
Optionally, when the driving device drives the camera in the camera device to move, a movement trajectory may be set for the driving, where the movement trajectory of the camera may include at least one of: a reciprocating trajectory between a predetermined start point and end point, a circulating trajectory along a predetermined path, a trajectory designed by a predetermined program, and a tracking trajectory following a target object. That is, the movement may reciprocate between a predetermined start point and end point, circulate along a predetermined path, execute a predetermined program, follow a target person, or mix these modes; mixed execution includes, for example, executing the predetermined program after tracking the target person for a fixed time, and resuming tracking of the target person for a fixed time after the program finishes.
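The mixed execution of movement modes described above — tracking the target person for a fixed time, then executing the predetermined program, and so on — can be sketched as a simple time-sliced scheduler. The mode names, durations, and sampling step are assumptions for illustration, not values from the disclosure:

```python
import itertools

def mixed_trajectory(modes, durations, total_time, step=1.0):
    """Yield (timestamp, mode) samples, cycling through modes and
    spending durations[i] seconds in modes[i] before switching."""
    schedule = itertools.cycle(zip(modes, durations))
    t = 0.0
    mode, remaining = next(schedule)
    while t < total_time:
        yield t, mode
        t += step
        remaining -= step
        if remaining <= 0:
            # Time slice exhausted: switch to the next movement mode.
            mode, remaining = next(schedule)

# Track the target for 3 s, run the programmed path for 2 s, repeat.
samples = list(mixed_trajectory(["track_target", "programmed_path"],
                                [3.0, 2.0], 10.0))
```

A real drive controller would react to events (target lost, end of rail) rather than only to elapsed time, but the alternation between tracking and a programmed path is the same.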
Optionally, the moving mode of the camera is at least one of the following: orbital movement and rotational movement. The track can be a slide rail, a telescopic rail, or a rope rail; the rotational movement may use a single-joint or multi-joint mechanism, such as a rocker arm or a rotating wheel; and the movement may combine orbital and rotational movement.
In an optional embodiment, before receiving the trigger signal, the video processing method may further include: receiving the operation of a user on the software interface; in response to the operation, a trigger signal is sent to the image pickup apparatus through the network.
In one aspect, before receiving an operation of a user on a software interface, the video processing method may further include: acquiring identification information of the camera device by scanning a graphical code arranged on the camera device; and displaying the operation which can be performed on the camera device on the software interface according to the identification information.
In another aspect, before receiving an operation of a user on a software interface, the video processing method may further include: acquiring geographical position information of a handheld device when the software interface is displayed on the handheld device of a person; and displaying, on the software interface according to the geographical position information, the camera devices within a preset range of the handheld device that the person can control and operate.
For example, data information sent by the portable terminal APP and acquired over the network contains the user identifier and confirms use of the video capture device. The data information confirming use of the video capture device may be generated when the user inputs the code of the video capture device in the APP and confirms use, when the user scans the two-dimensional code mark of the video capture device with the APP, or when the portable terminal APP determines that the current positioning information of the portable terminal and the position information of the video capture device satisfy a preset condition.
Step S104: acquiring the video shot by the camera under the trigger of the trigger signal.
In an optional embodiment, before receiving the trigger signal, the video processing method may further include: sensing that a person appears in a shooting range of the camera device through a sensor arranged on the camera device; and sending out a trigger signal in response to the sensing of the sensor.
Optionally, the sensor is at least one of the following: the device comprises an infrared induction unit, a radio frequency induction unit and a radar detection unit.
That is, in the present application, the sensor for triggering the camera device may be an infrared sensing unit responsive to human-body infrared rays, a radio frequency sensing unit responsive to a radio frequency signal, or a radar detection unit responsive to a moving object. In the embodiment of the present application, the radio frequency sensing unit may sense a radio frequency identification (RFID) card.
In another alternative embodiment, before receiving the trigger signal, the video processing method may further include: receiving an operation of a user on a switch in the camera device; in response to the operation, a trigger signal is issued.
Optionally, the switch is at least one of: the device comprises a key switch unit, a touch switch unit and a photoelectric switch unit.
That is, in the present application, the switch for triggering the camera device may be a photoelectric switch unit responsive to the passage of a human body, a key switch unit responsive to pressing by a human body, or a touch switch unit responsive to a human touch.
In addition, the feature triggering unit for starting the camera device may also respond to gesture information, mouth-shape information, or body-shape information of a person, or may be a command switch responsive to a network signal, for example, command information sent by the portable terminal APP over the network. The command information may be generated by selecting a nearby video capture device in the APP, by inputting the code of the video capture device in the APP or scanning its two-dimensional code with the APP, or by the APP detecting that the positioning coordinates of the terminal fall within an area set for the mobile video capture device.
Step S106: extracting the identification information corresponding to the person in the video, and storing the correspondence between the identification information and the video.
Optionally, extracting the identification information corresponding to the person in the video may include: identifying from the person an attachment on the person and/or a biometric characteristic of the person; the characteristic information of the attached matter and/or the characteristic information of the biometrics characteristic are used as identification information for identifying the person.
In addition, in this embodiment of the application, the obtained video and the identification information recognized from the video are stored correspondingly, and a supporting system can extract videos according to this correspondence, so that a tourist can conveniently obtain the video clips containing himself or herself.
Furthermore, when the obtained video and the identification information recognized from it are stored correspondingly, the video file may be mapped in a database and stored in correspondence with the identification mark, and the summary information of the video file may also contain the identification mark.
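A minimal sketch of such a correspondence store, assuming a simple SQLite table (the schema and all names are illustrative, not part of the patent):

```python
import sqlite3

def make_store(path=":memory:"):
    """Create a table mapping identification marks to video files."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS video_index ("
        " identity_mark TEXT NOT NULL,"   # e.g. face-feature digest or RFID
        " video_file TEXT NOT NULL)"      # name of the stored clip
    )
    return conn

def store_correspondence(conn, identity_mark, video_file):
    """Store one (identification mark, video file) correspondence."""
    conn.execute("INSERT INTO video_index VALUES (?, ?)",
                 (identity_mark, video_file))
    conn.commit()
```

One mark may map to many clips (the same guest appears in several videos), which is why no uniqueness constraint is placed on `identity_mark`.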
As can be seen from the above, in the embodiment of the application, a trigger signal may be received, where the trigger signal is used to trigger a camera device to shoot, the camera device includes a driving device and a camera, and the driving device is used to drive the camera in the camera device to move; the camera is used for shooting videos in the moving process; acquiring a video shot by a camera under the trigger of a trigger signal; the identification information corresponding to the person in the video is extracted, and the corresponding relation between the identification information and the video is stored, so that the aim of correspondingly storing the video containing the user and the identification information of the person identified from the video in advance is fulfilled.
It is easy to notice that, after the video is shot, the video is recognized to obtain the identification information of the person in it, and the video and the recognized identification information are stored correspondingly. When a user then requests his or her own videos, the videos matching the user's identification information can be retrieved from the video library according to that identification information. This achieves the purpose of storing in advance the correspondence between a video containing a user and the identification information of the person recognized from it, and of querying the user's videos from the pre-stored video library according to the user's identification information, thereby improving the reliability of the video processing mode and the user experience.
Therefore, the video processing method solves the technical problem of low reliability of a video processing mode in the related technology.
In an optional embodiment, after the identification information corresponding to the person in the video is extracted and the correspondence between the identification information and the video is stored, the video processing method further includes: acquiring the identification information of the video to be retrieved, and searching the stored videos for one or more videos corresponding to that identification information; and displaying the one or more videos to the user corresponding to the identification information.
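The lookup step can be sketched against the same kind of mapping; the table, column, and mark names below are assumptions for illustration only:

```python
import sqlite3

def find_videos(conn, identity_mark):
    """Return every stored video file indexed under the given
    identification mark (a face-feature digest, an RFID, etc.)."""
    rows = conn.execute(
        "SELECT video_file FROM video_index WHERE identity_mark = ?",
        (identity_mark,)).fetchall()
    return [r[0] for r in rows]

# Illustrative setup: an in-memory index holding three stored clips.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE video_index (identity_mark TEXT, video_file TEXT)")
conn.executemany("INSERT INTO video_index VALUES (?, ?)",
                 [("guest-42", "clip_0301.mp4"),
                  ("guest-42", "clip_0477.mp4"),
                  ("guest-07", "clip_0009.mp4")])

print(find_videos(conn, "guest-42"))  # → ['clip_0301.mp4', 'clip_0477.mp4']
```

An unknown mark simply yields an empty list, i.e. no videos are displayed for a user whose identification information was never stored.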
Optionally, the obtaining of the identification information of the video to be extracted may include: acquiring characteristic information of attachments of a person and/or identification information of biological characteristics of the person, and determining the characteristic information of the attachments and/or the characteristic information corresponding to the biological characteristics as the identification information of the person; wherein the attachment comprises at least one of: apparel, accessories, hand held articles; the biometric characteristic includes at least one of: facial features, posture features; the attachment serves to uniquely identify the person in a predetermined area.
In addition, when the number of people in the video is multiple, extracting the identification information corresponding to the people in the video may include: identifying attachments and/or biometric features on each person from the plurality of persons; and determining the characteristic information of the attachment on each person and/or the characteristic information corresponding to the biological characteristics as the identification information of each person in the plurality of persons, and storing the corresponding relation between the identification information of each person in the plurality of persons and the video.
In an alternative embodiment, the storing the correspondence between the identification information of each of the plurality of persons and the video includes: determining a time node at which the identification information of each of the plurality of persons is identified in the video; the time node is used as a time label of the identity identification information of each person in the plurality of persons; and storing the identification information of each of the plurality of persons and the corresponding relation between the time tag added to the identification information of each of the plurality of persons and the video.
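The per-person time-tag bookkeeping above can be sketched as follows; the record layout and the mark names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class VideoRecord:
    """One stored video plus, per person, the time nodes at which that
    person's identification information was recognized in the video."""
    video_file: str
    # identification mark -> list of time nodes (seconds into the video)
    time_tags: dict = field(default_factory=dict)

    def tag(self, identity_mark, time_node):
        self.time_tags.setdefault(identity_mark, []).append(time_node)

record = VideoRecord("clip_0301.mp4")
record.tag("guest-42", 12.5)   # guest-42 first recognized at 12.5 s
record.tag("guest-42", 47.0)   # recognized again later in the clip
record.tag("guest-07", 30.0)
```

Keeping the time nodes alongside each mark is what later allows the per-person segment of a multi-person video to be located.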
In another optional embodiment, extracting the identification information corresponding to the person in the video includes: extracting the sensed radio frequency identification information from the radio frequency signal and using it as the identification information of the person; or extracting the identification information of the person from the network trigger signal. Correspondingly, acquiring the identification information of the video to be retrieved includes: acquiring the radio frequency identification information sensed and extracted from the radio frequency signal and determining it as the identification information of the person; or extracting the identification information of the person from the network trigger signal.
For example, extracting the identification information corresponding to the person in the video may be implemented by an identification unit; the identification unit may be a radio frequency identification unit that reads an RFID card carried by the person.
For another example, the identification unit may acquire the identification information over the network, for example data information sent by the portable terminal APP that contains the user identifier and confirms use of the video capture device. This data information may be generated when the user inputs the code of the video capture device in the APP and confirms use, when the user scans the two-dimensional code mark of the video capture device with the APP, or when the portable terminal APP determines that the current positioning information of the terminal and the position information of the video capture device satisfy a preset condition.
Fig. 2 is a schematic view of an application scene of a shooting device according to the first embodiment of the present application, and Fig. 3 is a schematic view of the shooting device according to the first embodiment. As shown in Fig. 2 and Fig. 3, this embodiment adopts a convex rail for reciprocating motion, roller driving, and a linear reciprocating motion trajectory.
In this embodiment, proximity switches are provided on both sides of the video capture sports car, and limit iron blocks are provided at both ends of the track. When the sports car moves to a limit iron block, the proximity switch senses the block, movement in the current direction stops, and movement in the opposite direction starts. When the sports car reaches the limit iron block at the other end, it again responds to the proximity switch signal, stops, and reverses, so the video capture sports car makes uninterrupted reciprocating linear motion between the two limit iron blocks. The sports car responds to the signal of the human-body infrared sensing switch: when human-body infrared is sensed, it starts the uninterrupted reciprocating linear motion and performs the video acquisition operation; when no human-body infrared is sensed, it stops the reciprocating motion.
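A minimal simulation of this limit-switch reciprocation (the switch predicates and unit step are hypothetical stand-ins for the real hardware):

```python
def reciprocate(steps, at_left_limit, at_right_limit):
    """Simulate reciprocating motion between two limit switches.

    `at_left_limit(pos)` / `at_right_limit(pos)` stand in for the
    proximity switches sensing the limit iron blocks at the track ends.
    Returns the sequence of positions visited.
    """
    pos, direction = 0, +1
    visited = []
    for _ in range(steps):
        pos += direction
        if at_right_limit(pos):
            direction = -1   # reverse at the right limit block
        elif at_left_limit(pos):
            direction = +1   # reverse at the left limit block
        visited.append(pos)
    return visited

# Track from position 0 to 5: the sports car bounces between the ends.
path = reciprocate(12, lambda p: p <= 0, lambda p: p >= 5)
```

The returned path rises to the right limit, falls back to the left limit, and starts rising again, mirroring the uninterrupted back-and-forth motion described above.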
A video capture vehicle (also referred to as a sports car) has moving wheels mounted on it and provides a wireless connection. Proximity switches are arranged at the two ends of the sports car, and power-pickup carbon brushes are arranged at its front and rear; limit iron posts that can be sensed by the proximity switches are arranged at both ends of the track, and conductive flat cables on both sides of the angle-steel track supply power to the sports car.
In an embodiment, the video segment may be generated as follows: a video clip is extracted from the continuously captured video stream, where the time point at which the RFID radio frequency signal is sensed is taken as the first frame of the clip, and the time point at which the RFID radio frequency signal disappears is taken as the end frame. The sensed RFID is stored in the summary of the extracted video file. Triggering the sports car through the human-body infrared sensing switch to start the linear reciprocating movement and the video acquisition avoids invalid operation of the device and allows flexible, mobile video capture of tourists at any time. Behind the exhibition stand of an exhibition hall, the video capture sports car reciprocates on the corner track and continuously shoots video of the tourists together with the exhibits in the same frame.
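The RFID-bounded clip extraction above can be sketched as a small helper that turns sensing events into frame indices; the event format and frame rate are illustrative assumptions:

```python
def extract_clip_bounds(rfid_events, fps=25):
    """Map (time_s, sensed) RFID events to (first, end) frame indices.

    The moment the RFID signal is first sensed becomes the first frame
    of the clip; the moment it disappears becomes the end frame.
    Returns None if no complete sensed/disappeared pair occurs.
    """
    start_t = end_t = None
    for t, sensed in rfid_events:
        if sensed and start_t is None:
            start_t = t          # RFID first sensed: clip begins
        elif not sensed and start_t is not None:
            end_t = t            # RFID disappears: clip ends
            break
    if start_t is None or end_t is None:
        return None
    return int(start_t * fps), int(end_t * fps)

# RFID card sensed at 3.0 s and gone at 10.5 s, with a 25 fps stream:
bounds = extract_clip_bounds([(0.0, False), (3.0, True), (10.5, False)])
```

A real implementation would then cut the stored stream at these frame indices and write the sensed RFID into the clip's file summary.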
Example two
Fig. 4 is a first schematic view of an application scenario of a shooting device according to the second embodiment of the present application, Fig. 5 is a second schematic view of that application scenario, and Fig. 6 is a schematic view of the shooting device according to the second embodiment. As shown in Figs. 4, 5 and 6, this embodiment adopts a concave circulating track driven by synchronous pulleys; its motion trajectory is a curved circulation with person tracking. This embodiment can be applied to a slide; the triggering mode is face recognition, and the moving form is the concave track.
In this embodiment, after triggering, a video capture sports car starts, tracks the target person from the start point to the end point, and after reaching the end point queues with the other video capture sports cars to serve subsequent riders. The video capture sports car uses a direct-current micro gear reduction motor, a synchronous wheel, and a toothed synchronous belt, and may use a wireless connection.
In this embodiment, the video clip generates its first frame when the video capture sports car starts moving and its end frame when the sports car reaches the lowest end of the track; the clip does not contain footage of the return movement along the topmost track to the end point. The file name of the generated video file is mapped into a database, where the video file corresponds to the facial feature identification mark. Because this embodiment uses facial feature recognition, a visitor whose registered facial features are recognized can trigger the shooting cart to start the mobile video acquisition with automatic tracking, achieving an effect that conventional shooting means cannot.
The first video capture sports car automatically tracks and shoots as the target person slides down the slide, while the second is in a ready position and four more video capture sports cars wait in the track on the other side. The video capture sports cars are arranged in the concave track; each is powered through two power-pickup carbon brushes and driven by a direct-current micro gear reduction motor whose output-shaft synchronous wheel meshes with a toothed synchronous belt glued inside the track. Two conductive flat cables glued to one side wall of the track supply power to the sports car through its power-pickup carbon brushes.
Example three
Fig. 7 is a schematic view of a first application scenario of a shooting device according to the third embodiment of the present application, and Fig. 8 is a schematic view of a second application scenario of that shooting device. As shown in Figs. 7 and 8, in this embodiment a plurality of video capture sports cars move circularly in line along a serpentine slit track on the ceiling of a T-shaped show hall for shooting. The embodiment uses a slit-type serpentine track, roller driving, and a curved circulating motion trajectory, and can be applied to a T-shaped show hall. It is triggered by a master control switch, and the track may be a suspended-ceiling slit track.
In this embodiment, after the master control switch is turned on, the video capture sports cars move circularly along the track curve; after the master control switch is turned off, they stop moving.
The identification mode of this embodiment is facial feature recognition. Video segments are extracted from the continuously captured video stream: a time point a fixed duration before the moment the facial features are recognized is taken as the first frame of the segment, and a time point a fixed duration after the first frame is taken as the end frame. The richness of facial-expression change in the shot clip is analyzed, and the clips that pass this judgment are stored in correspondence with the facial feature identification mark. The file names of the extracted video files are mapped into a database, where each video file corresponds to a facial feature identification mark. After the mobile video acquisition device is started, video is captured continuously, segments of fixed duration before and after the moment the person's facial features are recognized are extracted from the continuous video, and the wonderful moments of rich expression change are recorded.
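The fixed-duration window around the recognition moment can be sketched as a small helper; the parameter names and frame rate are illustrative assumptions, and the expression-richness analysis is left out:

```python
def clip_window(recognized_at, pre_s, clip_len_s, fps=25):
    """Fixed-duration clip window around a face-recognition time point.

    first frame = `pre_s` seconds before the recognition moment
    (clamped to the start of the stream);
    end frame   = `clip_len_s` seconds after the first frame.
    """
    start_t = max(0.0, recognized_at - pre_s)
    end_t = start_t + clip_len_s
    return int(start_t * fps), int(end_t * fps)

# Face recognized 60 s into the stream; keep 2 s before it, 8 s total:
window = clip_window(60.0, 2.0, 8.0)
```

Clamping to zero handles a recognition moment near the start of the stream, where less than the full lead time is available.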
Example four
Fig. 9 is a first schematic view of an application scenario of a camera device according to the fourth embodiment of the present application, Fig. 10 is a second schematic view of that application scenario, and Fig. 11 is a schematic view of the camera device according to the fourth embodiment. As shown in Figs. 9, 10 and 11, in this embodiment the video capture device rests at the lowest end of a hexagonal column in the non-operating state; the target person stands on a lifting platform, and the video capture device is pulled to the top of the hexagonal column by a pull rope. A movable frame arranged around the hexagonal column carries the video capture device as it moves. This embodiment adopts a pole-type hexagonal column with a pull rope and an up-and-down rotating reciprocation, and can be used in a city park; its triggering mode is gravity pressing, its moving form is the vertical-pole track, and its driving device is the movable frame.
In this embodiment, the pull rope between the lifting platform and the video capture movable frame drives the movable frame to rotate: when the lifting platform is pressed down by gravity and descends, the video capture movable frame rotates and moves upward; when the gravity on the lifting platform is removed and it rises, the video capture movable frame rotates and moves downward.
In this embodiment, an external network command identifier may be used; for example, an APP-registered user scans the mobile video capture device or inputs the device number (the APP registration identifier).
The video may be generated as follows: a proximity switch on the upright responds to the video capture frame being in the lowest position. When the video capture movable frame leaves the lowest position, video capture starts in response to the proximity switch signal and the first frame of the video clip is generated; when the movable frame returns to the lowest position, the end frame of the clip is generated in response to the proximity switch signal and video capture stops.
The captured video clip is matched with the network trigger mark, and the clip matched with the mark is stored in correspondence with it; the received network trigger mark is stored in the summary of the generated video file.
By scanning the code with the portable terminal APP, the user confirms the mobile video acquisition device and uses it to perform the mobile video acquisition operation. The shooting angle can rise gradually from a horizontal view containing the user to a high-altitude overhead view of the whole scenic spot, achieving a presentation effect that conventional shooting means cannot reach.
Example five
Fig. 12 is a schematic view of an application scenario of a camera device according to the fifth embodiment of the present application. As shown in Fig. 12, this embodiment may be applied to a climbing ladder, with a rope track arranged on one side of the ladder. The video capture device is dragged by a pull rope and slides along the rope track. This embodiment may be called a rope type: a rope pulling device serves as the driving device, the track performs a fixed reciprocating motion, the triggering mode is an RFID radio frequency card, and the moving form is the rope track.
RFID readers are provided at both ends of the rope track and on the mobile video acquisition device; they can read, within 10 m, passive RFID radio frequency cards that conform to the 860-960 MHz air interface parameters of the ISO/IEC 18000-6 standard.
The video capture unit is connected with the pull rope, and the top end of the rope track is provided with a pull rope device.
In response to the radio frequency signal identification received by the RFID readers, the rope pulling device drags the mobile video acquisition device back and forth between the two ends of the rope track, and it stops dragging a fixed time after the RFID readers can no longer receive the radio frequency signal.
In response to the radio frequency information identification received by the RFID reader set on the mobile video acquisition device, the rope pulling device reduces the dragging speed; when that RFID reader can no longer receive the radio frequency information identification, the rope pulling device restores the dragging speed.
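The slow-down-near-the-user behavior can be sketched as a tiny speed selector; the two speed values and the predicate are illustrative assumptions:

```python
def drag_speed(card_in_range, normal_speed=1.0, slow_speed=0.3):
    """Return the rope-pulling speed for the current instant.

    Slow down while the RFID reader mounted on the moving device
    senses the user's card; otherwise resume the normal speed.
    """
    return slow_speed if card_in_range else normal_speed

# Device approaches the user, passes through card range, then leaves:
speeds = [drag_speed(in_range) for in_range in (False, True, True, False)]
```

Slowing only while the card is in range keeps the camera near the user longer without lengthening the overall reciprocation cycle much.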
This embodiment adopts an RFID radio frequency card for identification. Video capture is performed in response to receiving RFID radio frequency information; when the RFID radio frequency information can no longer be received, video capture finishes after a fixed delay and a video file is generated. The file names of the generated video files are mapped in a database, where each video file corresponds to several pieces of RFID radio frequency information containing time tags.
The RFID radio frequency information containing time tags records the sensed RFID radio frequency information and the corresponding time points: the time point at which the radio frequency information was received and the time point at which it could no longer be received.
Through the ultra-high-frequency RFID card, the mobile video acquisition device can capture video of users within 10 m, and reducing the moving speed when the device moves near a user enhances the presentation effect of the shot.
Example six
Fig. 13 is a schematic view of an application scene of a camera device according to the sixth embodiment of the present application, and Fig. 14 is a schematic view of the camera device according to the sixth embodiment. As shown in Figs. 13 and 14, this embodiment can be applied to trampoline sport, in which the video capture device tracks the up-and-down movement of the target person on the trampoline for shooting. Linear motor induction coils are arranged on both sides of the video capture device and respectively pass through two tubular magnetic shafts.
This embodiment may be called a linear motor type: the driving method is a tubular high-speed servo linear motor whose trajectory is a linear reciprocating motion, and the moving form may be a tubular track. The tubular high-speed servo linear motor responds to the video-image portrait analysis of the background server: after a person enters the target area, the magnetic-axis linear motor moves to the position corresponding to the height of the person's head and then tracks the head height in response to the portrait analysis.
The triggering mode may be an external network instruction; for example, the portable terminal APP detects that its positioning coordinates are located in a predetermined area and sends a triggering instruction to the mobile video acquisition devices in that area. When the portable terminal's positioning coordinates are located in the setting area of the mobile video acquisition device, the APP registration identifier can be acquired and used as the identity identification mark. A video clip is extracted from the continuously captured video stream, where a time point a fixed duration after the person enters the target area (according to the video image analysis) is taken as the first frame of the clip, and a time point a fixed duration before the person leaves the target area is taken as the end frame.
The extracted video clips are processed with a slow-motion effect and stored in correspondence with the identification marks. The file names of the generated video files are mapped in a database, where each video file corresponds to a network trigger mark.
The shooting device is driven by the high-speed servo linear motor and tracks the person's movement state; through the slow-motion effect, it achieves a result that conventional shooting cannot.
In the following embodiments, facial recognition is adopted as the identification means in Embodiments Seven to Ten and Thirteen to Seventeen, an RFID radio frequency card is adopted in Embodiment Eleven, and feature recognition of a chest-card image is adopted in Embodiment Twelve.
Example seven
Fig. 15 is a schematic view of an application scenario of a photographing device according to the seventh embodiment of the present application, and Fig. 16 is a schematic view of that photographing device. As shown in Figs. 15 and 16, this embodiment may be called a magnetic levitation track and may be applied in a science and technology center. In this embodiment, the target person clicks a touch screen to trigger a magnetic levitation video capture sports car to shoot while reciprocating in a straight line. Two sliding plates are arranged at the lower part of the sports car, with magnets inside that make the sports car levitate above the track. The driving mode is a magnet with an induction coil, the track motion is a straight reciprocation, the triggering mode is a touch switch, and the moving form is the magnetic levitation track. Magnets are arranged on the sliding plates on both sides of the sports car, magnets of the same polarity are arranged on both sides of the track, and the sports car is thereby levitated above the track. A horizontal magnetic column is arranged in the middle of the sports car, and a horizontal coil in the middle of the track attracts and drives the horizontal magnetic column, moving the sports car.
The magnetic levitation video capture sports car reciprocates on the track along a predetermined programmed trajectory. In response to the touch switch signal, it starts the linear reciprocating movement for a fixed time and performs the video acquisition operation. A video clip is extracted from the continuously captured video stream, where a time point a fixed duration after the person enters the target area (according to the video image analysis) is taken as the first frame of the clip, and a time point a fixed duration before the person leaves the target area is taken as the end frame.
The extracted video segments are stored in correspondence with the facial feature identification marks. The file name of each generated video file is mapped into a database, where the video file corresponds to its facial feature identification mark.
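The file-name-to-mark mapping might look like the following sketch. The schema, table name, and example identifiers are assumptions made for illustration; the patent only requires that each stored video file correspond to a facial feature identification mark.

```python
import sqlite3

# In-memory database holding the described mapping: one row per clip,
# keyed by the generated file name, carrying the facial-feature mark.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clips (file_name TEXT PRIMARY KEY, face_id TEXT)")

def store_clip(file_name, face_id):
    """Record the correspondence between a video file and a face mark."""
    conn.execute("INSERT INTO clips VALUES (?, ?)", (file_name, face_id))

def clips_for(face_id):
    """Look up all stored video files for one facial-feature mark."""
    rows = conn.execute(
        "SELECT file_name FROM clips WHERE face_id = ?", (face_id,))
    return [r[0] for r in rows]

# Hypothetical file names and mark, for illustration only
store_clip("cart_20191227_001.mp4", "face_0042")
store_clip("cart_20191227_002.mp4", "face_0042")
```

With this mapping in place, a visitor's clips can later be retrieved by the mark alone, which is the retrieval path the later apparatus embodiments rely on.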
Triggering the cart by the touch switch to start the linear reciprocating movement and the video capture operation prevents the device from running when no one is present, and allows visitors to flexibly perform mobile video capture at any time.
Example eight
Fig. 17 is a schematic view of an application scenario of a photographing device according to an eighth embodiment of the present application, and fig. 18 is a schematic view of the photographing device according to the eighth embodiment. As shown in fig. 17 and fig. 18, this embodiment may be referred to as a lifting slide and may be applied to extreme sports, where a lifting lead-screw slide tracks a target person for mobile shooting. A servo motor drives the lead screw to rotate, moving the video capture device up and down. The driving mode of this embodiment is a lead screw and sliding table, the trajectory is a tracking trajectory, the triggering mode is image analysis, and the movement form is a linear track.
The lead-screw sliding table responds to the portrait analysis of the video images by the background server: after a person enters the target area, the table moves to the height of the person's head, and then tracks the head in response to the ongoing analysis. A video clip is extracted from the continuously captured video stream: the frame at a fixed duration after the person is detected entering the target area is taken as the first frame, and the frame at a fixed duration before the person is detected leaving the target area is taken as the end frame. The extracted video segments are stored in correspondence with the facial feature identification marks, and the file names of the video files are mapped into a database, where each video file corresponds to its facial feature identification mark. By tracking the person's movement state, the whole movement process is recorded, achieving an effect that conventional shooting cannot present.
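The head-tracking behavior can be sketched as a simple rate-limited follower. The patent does not specify a control law, so the step limit and all names here (`next_table_position`, millimetre units) are assumptions chosen only to illustrate the idea of the table chasing the detected head height.

```python
def next_table_position(current_mm, head_mm, max_step_mm=50):
    """One control tick for the lead-screw sliding table: move toward
    the head height reported by image analysis, limited to max_step_mm
    per tick so the camera follows smoothly rather than jumping.
    (Simplified sketch; the patent gives no specific control law.)"""
    error = head_mm - current_mm
    step = max(-max_step_mm, min(max_step_mm, error))
    return current_mm + step

# Table starts low; analysis reports a head at roughly 1.6 m
pos = 1000
for head in (1600, 1600, 1620):
    pos = next_table_position(pos, head)
```

Clamping the per-tick step stands in for the finite speed of the servo motor driving the screw.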
Example nine
Fig. 19 is a schematic view of an application scenario of a camera according to the ninth embodiment of the present application, and fig. 20 is a schematic view of the camera according to the ninth embodiment. As shown in fig. 19 and fig. 20, this embodiment may be referred to as a hanging wheel type and may be applied to a scenic passageway, where a mobile video capture device installed on the upper part of a light pole moves up and down to shoot a target person. The upper part of the light pole is provided with a spherical radar sensing device, a rope-pulling device and a hanging-wheel video capture device. In this embodiment, the driving mode is a rope puller, the trajectory is an up-and-down reciprocating motion along the lifting rope, and the triggering mode is a radar inductive switch. After the radar inductive switch detects a moving object, the video capture device moves up and down along the lifting rope; when the radar inductive switch no longer detects a moving object, the video capture device stops moving.
A video segment is extracted from the continuously captured video stream: the frame at a fixed duration before the time point at which the facial features are identified is taken as the first frame of the segment, and the frame at a fixed duration after the first frame is taken as the end frame. Video segments are extracted separately for different recognized subjects, and each extracted segment is stored in correspondence with its facial feature identification mark. The file abstract of each generated video file corresponds to a plurality of time-tagged facial feature identification marks.
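This recognition-keyed variant differs from the enter/leave rule of the earlier embodiments: the clip is anchored to the recognition time and has a fixed length. A sketch, with illustrative names and a per-subject wrapper that is an assumption about how "different facial feature recognition subjects" would be handled:

```python
def clip_bounds_for_recognition(t_recognized, pre, length, fps):
    """First frame: a fixed duration BEFORE the recognition time.
    End frame: a fixed duration AFTER the first frame (i.e. a clip of
    fixed length), per this embodiment's rule.  Names illustrative."""
    first = max(int(round((t_recognized - pre) * fps)), 0)
    end = first + int(round(length * fps))
    return first, end

def clips_per_subject(recognitions, pre, length, fps):
    """recognitions: (face_id, time) pairs from the analyzer.
    Returns one clip-boundary pair per recognized subject."""
    return {fid: clip_bounds_for_recognition(t, pre, length, fps)
            for fid, t in recognitions}

# Two subjects recognized at different times in the same stream
clips = clips_per_subject([("face_a", 12.0), ("face_b", 40.0)],
                          pre=2.0, length=10.0, fps=25)
```

Because each subject gets its own boundaries, two visitors passing the pole at different times yield two separate clips from the same continuous stream.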
The time-tagged facial feature identification mark records the identified facial features together with the corresponding time point information, namely the time point at which the facial features were identified and the time point at which they were no longer identified. Through this embodiment, the shooting view angle can rise autonomously from a horizon-level view containing the visitor to a high-altitude view overlooking the whole scenic spot, achieving a presentation effect that conventional shooting means cannot reach.
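One reading of the time-tagged mark is a record pairing a face identifier with the recognition interval. The field names below are assumptions; the patent only states that both time points are recorded.

```python
from dataclasses import dataclass

@dataclass
class TimedFaceMark:
    """A facial-feature identification mark carrying time tags:
    the time the features were first identified and the time they
    were no longer identified.  (Field names are illustrative.)"""
    face_id: str
    t_recognized: float   # seconds: features first identified
    t_lost: float         # seconds: features no longer identified

    def duration(self):
        """How long the subject stayed recognizable in the stream."""
        return self.t_lost - self.t_recognized

mark = TimedFaceMark("face_0042", 12.5, 31.0)
```

A file abstract corresponding to "a plurality of" such marks would then simply be a list of `TimedFaceMark` records attached to one video file.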
Example ten
Fig. 21 is a schematic view of an application scenario of a camera according to a tenth embodiment of the present application, and fig. 22 is a schematic view of the camera according to the tenth embodiment. As shown in fig. 21 and fig. 22, this embodiment may be referred to as a conveyor belt type and may be applied to a densely visited scenic spot, where video capture devices on a conveyor belt continuously shoot people moving through the area. The conveyor belt is driven by a gear motor and carries a plurality of video capture devices.
The driving mode of this embodiment is a gear motor and rotating wheels, the trajectory is a belt circulation, the triggering mode is a master control switch, and the movement form is conveyor-belt circulation. After the master control switch is closed, the video capture devices circulate with the conveyor belt; after the master control switch is opened, the video capture devices stop moving.
A video segment is extracted from the continuously captured video stream: the frame at a fixed duration before the time point at which the facial features are identified is taken as the first frame of the segment, and the frame at a fixed duration after the first frame is taken as the end frame. Video segments are extracted separately for different recognized subjects, and each extracted segment is stored in correspondence with its facial feature identification mark.
The file names of the extracted video files are mapped into a database, where each video file corresponds to its facial feature identification mark.
With a plurality of mobile capture devices arranged on the conveyor belt, efficient mobile video capture can be performed in densely visited areas, and people can be shot from multiple angles.
Example eleven
Fig. 23 is a schematic view of an application scenario of a camera according to an eleventh embodiment of the present application, and fig. 24 is a schematic view of the camera according to the eleventh embodiment. As shown in fig. 23 and fig. 24, this embodiment may be referred to as a scissor lift type and may be applied to a temporary scenic spot, where a video capture device is temporarily installed to shoot a target person from bottom to top. A hydraulic station drives an oil cylinder, which drives the scissor-fork lifting device to reciprocate up and down.
The driving mode of this embodiment is oil pressure, the trajectory is a programmed trajectory, the triggering mode is a master control switch, and the movement form is a telescopic lifting structure such as a scissor lift. After the master control switch is closed, the video capture device moves up and down with the scissor lift; after the master control switch is opened, the video capture device stops moving.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration before the time point at which the RFID reader on the mobile video capture device first receives the radio frequency information identifier is taken as the first frame of the clip, and the frame at a fixed duration after the time point at which the reader can no longer receive the identifier is taken as the end frame.
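The RFID rule pads the clip outward on both sides of the tag-visibility window, the opposite of the image-analysis rule that trims inward. A sketch, with illustrative names and example padding values:

```python
def rfid_clip_bounds(t_first_read, t_last_read, pre, post, fps):
    """First frame: a fixed duration BEFORE the tag is first read.
    End frame: a fixed duration AFTER the tag can no longer be read.
    The clip therefore brackets the whole period the tagged visitor
    was within reader range.  (Names are illustrative.)"""
    first = max(int(round((t_first_read - pre) * fps)), 0)
    end = int(round((t_last_read + post) * fps))
    return first, end

# Tag readable from t=20 s to t=45 s; pad 3 s on each side at 25 fps
bounds = rfid_clip_bounds(20.0, 45.0, pre=3.0, post=3.0, fps=25)
```

Padding outward makes sense here because the ~10 m read range of the UHF tag is wider than the camera's useful framing, so the visitor is typically on screen slightly before and after the tag reads.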
The file names of the extracted video files are mapped into a database, where each video file corresponds to its RFID radio frequency information identifier.
With an ultrahigh-frequency RFID card, the mobile video capture device can capture video of users within 10 m, and the shooting view angle can rise autonomously from a horizon-level view containing the visitor to a high-altitude view overlooking the whole scenic spot, enhancing the presentation effect for the user. The device can be towed to a temporary position at any time according to the temporary requirements of the scenic spot to perform video capture tasks.
Example twelve
Fig. 25 is a schematic view of an application scenario of a camera according to a twelfth embodiment of the present application, and fig. 26 is a schematic view of the camera according to the twelfth embodiment. As shown in fig. 25 and fig. 26, this embodiment may be referred to as a multi-stage telescopic rod mechanism and may be applied to a scenic spot entrance. In this embodiment, when a target person passes through the sensing area of a photoelectric switch, a cylinder performs an ascending movement in response to the sensing signal, driving the video capture device to shoot the person. A travel switch is arranged at the top end of the upright rod, and its signal triggers the cylinder to descend. The embodiment uses a pneumatic power system consisting of an air compressor, an oil-water separator, an electromagnetic directional valve and a cylinder, which drives the video capture device up and down.
In this embodiment, the driving mode is a cylinder, the trajectory is a reciprocating motion, the triggering mode is a photoelectric switch, and the movement form is a multi-stage cylinder. The mobile video capture device is mounted at the top of the multi-stage cylinder, whose extension and retraction are controlled by the electromagnetic directional valve of the pneumatic line: the valve extends the cylinder in response to the interruption signal of the through-beam photoelectric switch, and retracts it in response to the closing signal of the travel switch.
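The valve logic reduces to a small state function. This is one reading of the description, with illustrative names; the priority given to the travel switch (so the cylinder always retracts at full extension) is an assumption.

```python
def valve_command(beam_broken, travel_switch_closed, current):
    """One control tick for the directional valve of the multi-stage
    cylinder: a person interrupting the through-beam photoelectric
    switch extends the cylinder; the travel switch closing at the top
    of the rod retracts it; otherwise the current command holds.
    (Simplified sketch; switch priority is an assumption.)"""
    if travel_switch_closed:
        return "retract"
    if beam_broken:
        return "extend"
    return current

state = "retracted"
state = valve_command(True, False, state)    # visitor breaks the beam
```

Checking the travel switch first means the cylinder cannot be driven past its top limit even while the beam remains interrupted.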
The mobile video capture device takes the time point of the opening signal of the through-beam photoelectric switch as the first frame of the video clip, and the frame at a fixed duration after the time point of the closing signal of the travel switch as the end frame. The generated video clip is stored in correspondence with the feature identification mark of the chest card, and the feature identification of the chest card image is recorded in the file abstract of the generated video file.
At the scenic spot entrance, the entry of a visitor triggers the multi-stage cylinder to telescope upward and perform the video capture operation, so that the shooting view angle can rise autonomously from a horizon-level view containing the visitor to a high-altitude view overlooking the whole scenic spot, achieving a presentation effect that conventional shooting means cannot reach.
Example thirteen
Fig. 27 is a schematic view of an application scenario of a camera according to a thirteenth embodiment of the present application, and fig. 28 is a schematic view of the camera according to the thirteenth embodiment. As shown in fig. 27 and fig. 28, this embodiment may be referred to as a bidirectional sliding table. It may be applied to an art corridor: when a target person passes through the corridor, the mobile video capture device tracks and shoots the person, driven by the bidirectional sliding table. The first track is provided with a power supply flat cable and a rack, which engage the power-collecting carbon brush and the output shaft gear of the first servo motor respectively. The second track is mounted on the first sliding table and provided with a rack that engages the output shaft gear of the second servo motor. A connecting line between the first and second servo motors is wrapped in a protective drag chain. The video capture device is mounted on the second sliding table.
The driving mode of this embodiment is a synchronous belt and synchronous wheel, the trajectory is a tracking-programming trajectory, the triggering mode is image analysis, and the movement form is a slide rail. In this embodiment the mobile video capture device is configured with multiple sets of preset trajectory programs, each corresponding to a multi-angle shot of an artwork.
The mobile video capture device responds to the portrait analysis of the video images by the background server: after tracking and capturing the person's activity for a fixed time, it executes the preset trajectory program for the artwork the person is viewing or the artwork nearby, and after the preset trajectory finishes, it resumes tracking for a fixed time in response to the ongoing analysis.
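The track/preset/track sequence above can be sketched as a shooting plan handed to the motion controller. The list-of-phases representation and all names are assumptions made for illustration.

```python
def shooting_plan(track_time, preset_program, resume_time):
    """Build the phase sequence described in this embodiment: track
    the person for a fixed time, run the preset trajectory program for
    the artwork being viewed, then resume tracking for a fixed time.
    Each entry is (phase_name, payload).  (Representation is an
    assumption; the patent specifies only the order of phases.)"""
    return [("track", track_time),
            ("preset", preset_program),
            ("track", resume_time)]

# Hypothetical preset program for one artwork: three camera moves
plan = shooting_plan(5.0, ["pan_left", "dolly_in", "pan_right"], 5.0)
```

Interleaving tracking with preset moves is what produces the alternation between person and artwork described at the end of this embodiment.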
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration after the person is detected entering the target area is taken as the first frame, and the frame at a fixed duration before the person is detected leaving the target area is taken as the end frame. Video segments are extracted separately for different recognized subjects, and each extracted segment is stored in correspondence with its facial feature identification mark. The file names of the generated video files are mapped into a database, where each video file corresponds to its facial feature identification mark.
By tracking the movement of the target person and switching to preset programmed trajectories, the shooting subject can switch naturally between the person and the shot scene, presenting the two alternately.
Example fourteen
Fig. 29 is a schematic view of an application scenario of a camera according to a fourteenth embodiment of the present application, and fig. 30 is a schematic view of the camera according to the fourteenth embodiment. As shown in fig. 29 and fig. 30, this embodiment may be referred to as a single-pivot swing arm and may be applied to a water park. A video capture device is arranged at the bottom end of a water storage bucket: when the bucket reaches a predetermined water level, the device flips up as the bucket's center of gravity shifts; after the bucket empties, the device moves down as the bucket returns.
In this embodiment, the photographing apparatus may include: a water bucket, a rotating shaft, a water valve, a video capture device and a counterweight. The driving mode is water power, the trajectory is water-pressure reciprocating, the triggering mode is water flow, and the movement form is single-shaft rotation. Under water flow, the storage bucket swings repeatedly about the rotating shaft, and the mobile video capture device at one end of the bucket performs an up-and-down arc movement centered on the shaft.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration after the person is detected entering the target area is taken as the first frame, and the frame at a fixed duration before the person is detected leaving the target area is taken as the end frame. The extracted video segments are stored in correspondence with the facial feature identification marks, and the file names of the video files are mapped into a database, where each video file corresponds to its facial feature identification mark.
Through this embodiment, the shooting view angle can rise autonomously from a horizon-level view containing the visitor to a high-altitude view overlooking the whole scenic spot, achieving a presentation effect that conventional shooting means cannot reach.
Example fifteen
Fig. 31 is a schematic view of an application scenario of a camera according to a fifteenth embodiment of the present application, and fig. 32 is a schematic view of the camera according to the fifteenth embodiment. As shown in fig. 31 and fig. 32, this embodiment may be called a carousel type and may be applied to a water park, where a plurality of video capture devices arranged on a rotating waterwheel shoot continuously at the same time. The photographing apparatus of this embodiment may include: a waterwheel, a water valve, video capture devices and a counterweight.
The driving mode of this embodiment is water power, the trajectory is circulation, the triggering mode is water flow, and the movement form is single-shaft rotation. Under water flow, the storage buckets drive the wheel in circular motion, and the plurality of mobile video capture devices mounted on the wheel circulate about the rotating shaft.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration after the person is detected entering the target area is taken as the first frame, and the frame at a fixed duration before the person is detected leaving the target area is taken as the end frame. Video segments are extracted separately for different recognized subjects, and each extracted segment is stored in correspondence with its facial feature identification mark. The file names of the video files are mapped into a database, where each video file corresponds to its facial feature identification mark.
Through this embodiment, the shooting view angle can rise autonomously from a horizon-level view containing the visitor to a high-altitude view overlooking the whole scenic spot, achieving a presentation effect that conventional shooting means cannot reach.
Example sixteen
Fig. 33 is a schematic view of an application scenario of a camera according to a sixteenth embodiment of the present application, and fig. 34 is a schematic view of the camera according to the sixteenth embodiment. As shown in fig. 33 and fig. 34, this embodiment may be referred to as a multi-pivot swing arm and may be used on a spiral staircase, where the video capture device tracks and shoots a target person walking on the staircase. A first stepping motor drives the first rocker arm to rotate, and a second stepping motor drives the second rocker arm to rotate; the rotating shaft of the second rocker arm is mounted on the first rocker arm.
In this embodiment, the driving mode is a stepping motor, the trajectory is a tracking trajectory, the triggering mode is image analysis, and the movement form is multi-joint rotation. The turntable and rocker arms respond to the portrait analysis of the video images by the background server: after a person enters the target area, the rocker-arm turntable moves the video capture device to follow the position of the person's head.
In response to the background server's portrait analysis of the video images, the mobile video capture device takes the frame at a fixed duration before the time point at which the portrait appears in the predetermined area as the first frame of the video clip, and the time point at which the portrait leaves the predetermined area as the end frame. The generated video segments are stored in correspondence with the facial feature identification marks, and the file names of the generated video files are mapped into a database, where each video file corresponds to its facial feature identification mark.
Through the mobile video capture device of this embodiment, the freely rotating joints track the target person for shooting, achieving an effect that conventional shooting means cannot reach.
Example seventeen
Fig. 35 and fig. 36 are schematic views of application scenarios of a camera according to a seventeenth embodiment of the present application, and fig. 37 is a schematic view of the camera according to the seventeenth embodiment. As shown in fig. 35 to fig. 37, this embodiment may be referred to as a sliding table and rocker arm combination and may be applied to a scenic spot welcome road, on which a guest-greeting device waving a flagpole is arranged, the device also carrying a video capture device. The greeting device drives the scissor-fork mechanism on the sliding table in telescopic motion through a first cylinder, and drives the rocker arm in an up-and-down swinging motion through a second cylinder. The photographing device in this embodiment may include: an air pump, an oil-water separator, a first cylinder, a first electromagnetic directional valve, a second electromagnetic directional valve, a second cylinder, a third electromagnetic directional valve and a fourth electromagnetic directional valve.
The driving mode of this embodiment is cylinders, the trajectory is a random disordered trajectory, the triggering mode is a master control switch, and the movement form is linear reciprocation of the sliding table combined with single-joint rotation. The sliding table is controlled by the extension and retraction of the first cylinder, and the rocker arm by the second cylinder. The first electromagnetic directional valve controls the extension of the first cylinder and the second electromagnetic directional valve its retraction; the two valves alternate, each phase lasting a randomly selected duration between 5 and 20 seconds. For example, 8 seconds is randomly selected as the first phase: the first valve ventilates the cylinder while the second valve exhausts, and the sliding table moves toward one end; after 8 seconds the second phase begins. 17 seconds is randomly selected as the second phase: the two valves swap instructions, the first valve exhausts while the second valve ventilates the first cylinder, and the sliding table moves in the opposite direction; after 17 seconds the third phase begins.
The third electromagnetic directional valve controls the extension of the second cylinder and the fourth electromagnetic directional valve its retraction; the two valves alternate, each phase lasting a randomly selected duration between 3 and 12 seconds. For example, 5 seconds is randomly selected as the first phase: the third valve ventilates the cylinder while the fourth exhausts, so the rocker arm moves upward; after 5 seconds the second phase begins. 9 seconds is randomly selected as the second phase: the two valves swap instructions, the third valve exhausts while the fourth ventilates the second cylinder, so the rocker arm moves downward; after 9 seconds the third phase begins.
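The random alternation scheme for both valve pairs can be sketched as a phase generator. The function shape, phase labels, and seeding are assumptions; the patent specifies only the random duration ranges and the swapping of valve roles each phase.

```python
import random

def valve_schedule(n_phases, lo, hi, seed=None):
    """Generate the random valve timing described above: each phase
    lasts a random whole number of seconds in [lo, hi], and the two
    directional valves swap roles (pressurize vs. exhaust) every
    phase, reversing the motion at unpredictable intervals.
    (Sketch; representation is an assumption.)"""
    rng = random.Random(seed)
    schedule = []
    role = "valve_A_pressurized"   # first valve ventilates the cylinder
    for _ in range(n_phases):
        schedule.append((role, rng.randint(lo, hi)))
        # swap which valve pressurizes and which exhausts
        role = ("valve_B_pressurized" if role == "valve_A_pressurized"
                else "valve_A_pressurized")
    return schedule

slide_phases = valve_schedule(4, 5, 20, seed=0)   # slide table: 5-20 s
arm_phases = valve_schedule(4, 3, 12, seed=1)     # rocker arm: 3-12 s
```

Running the two schedules independently is what yields the "random disordered trajectory": slide and arm reversals never synchronize.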
After the master control switch is closed, the video capture device moves randomly with the sliding table and rocker arm; after the master control switch is opened, the video capture device stops moving.
A video segment is extracted from the continuously captured video stream: the frame at a fixed duration before the time point at which the facial features are identified is taken as the first frame of the segment, and the frame at a fixed duration after the first frame is taken as the end frame. Video segments are extracted separately for different recognized subjects, and each extracted segment is stored in correspondence with its facial feature identification mark. The file names of the video files are mapped into a database, where each video file corresponds to its facial feature identification mark.
Through this embodiment, the shooting view angle can rise autonomously from a horizon-level view containing the visitor to a high-altitude view overlooking the whole scenic spot, achieving a presentation effect that conventional shooting means cannot reach.
Example eighteen
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for performing the video processing method of one of the above embodiments. Fig. 38 is a schematic diagram of a video processing apparatus according to an embodiment of the present application. As shown in fig. 38, the video processing apparatus includes: a first receiving unit 3801, a first acquiring unit 3803, and an extracting unit 3805. The video processing apparatus is described in detail below.
The first receiving unit 3801 is configured to receive a trigger signal, where the trigger signal triggers a camera device to shoot. The camera device includes a driving device and a camera; the driving device drives the camera to move, and the camera shoots video during the movement.
A first acquiring unit 3803, configured to acquire a video captured by the camera under the trigger of the trigger signal.
The extracting unit 3805 is configured to extract, from the video, the identification information corresponding to a person in the video, and to store the correspondence between the identification information and the video.
It should be noted here that the first receiving unit 3801, the first acquiring unit 3803 and the extracting unit 3805 correspond to steps S102 to S106 in the first embodiment; the units implement the same examples and application scenarios as the corresponding steps, but are not limited to the content disclosed in the first embodiment. It should also be noted that the above units, as part of an apparatus, may be implemented in a computer system, for example as a set of computer-executable instructions.
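The three units can be sketched structurally as follows. This is an illustrative rendering of fig. 38 only: the class and method names, the callable camera/identifier stand-ins, and the dictionary store are all assumptions, not part of the patent.

```python
class VideoProcessingApparatus:
    """Structural sketch of fig. 38: a receiving unit for the trigger
    signal, an acquiring unit for the captured video, and an extracting
    unit that stores the identification-information / video
    correspondence.  (Names and internals are illustrative.)"""

    def __init__(self, camera, identifier):
        self.camera = camera          # callable: trigger signal -> video
        self.identifier = identifier  # callable: video -> person's id info
        self.store = {}               # identification info -> [videos]

    def on_trigger(self, signal):
        # first receiving unit (receive signal) + first acquiring unit
        video = self.camera(signal)
        # extracting unit: identify the person, store the correspondence
        person_id = self.identifier(video)
        self.store.setdefault(person_id, []).append(video)
        return person_id

# Hypothetical stand-ins for the camera and the recognition backend
app = VideoProcessingApparatus(
    camera=lambda sig: f"video_for_{sig}",
    identifier=lambda vid: "face_0042",
)
app.on_trigger("touch_switch")
```

Keyed storage by identification information is what makes the later retrieval-by-user query (second acquisition unit) a simple dictionary or database lookup.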
As can be seen from the above, in the above embodiments of the present application, the first receiving unit receives a trigger signal that triggers the camera device to shoot, where the camera device includes a driving device that moves the camera and a camera that shoots video during the movement; the first acquiring unit then acquires the video shot by the camera under the trigger of the trigger signal; and the extracting unit extracts the identification information corresponding to the person in the video and stores the correspondence between the identification information and the video. Through this video processing apparatus, videos containing a user are stored in advance in correspondence with the identification information of the persons recognized in them, so that a user's videos can be retrieved from the pre-stored video library according to the user's identification information. This improves the reliability of the video processing mode and the user experience, and solves the technical problem of low reliability of video processing modes in the related art.
In an alternative embodiment, the video processing apparatus further comprises: a sensing unit, configured to sense, before the trigger signal is received, that a person is within the shooting range of the camera device through a sensor arranged on the camera device; and a first response unit, configured to send out the trigger signal in response to the sensing of the sensor.
In an alternative embodiment, the sensor is at least one of: the device comprises an infrared induction unit, a radio frequency induction unit and a radar detection unit.
In an alternative embodiment, the video processing apparatus further comprises: a second receiving unit configured to receive an operation of a switch in the image pickup apparatus by a user before receiving the trigger signal; and the second response unit is used for responding to the operation and sending out a trigger signal.
In an alternative embodiment, the switch is at least one of: the device comprises a key switch unit, a touch switch unit and a photoelectric switch unit.
In an alternative embodiment, the video processing apparatus further comprises: the third receiving unit is used for receiving the operation of a user on the software interface before receiving the trigger signal; and the third response unit is used for responding to the operation and sending a trigger signal to the image pickup device through the network.
In an alternative embodiment, the apparatus further comprises: the third acquisition unit is used for acquiring the identification information of the camera device by scanning the graphical code arranged on the camera device before receiving the operation of the user on the software interface; and the display unit is used for displaying the operation which can be carried out on the camera device on the software interface according to the identification information.
In an alternative embodiment, the video processing apparatus further comprises: a fourth acquisition unit, used to acquire the geographical position information of a person's handheld device, in the case where the software interface is displayed on the handheld device, before the user's operation on the software interface is received; and a display unit, used to display on the software interface, according to the geographical position information, the camera devices within a preset range of the handheld device that the person can control, and the operations that can be performed on them.
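A minimal sketch of this geographic filtering step, assuming WGS-84 latitude/longitude coordinates and a haversine great-circle distance (neither of which the application specifies):

```python
from math import radians, sin, cos, asin, sqrt


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km


def cameras_in_range(handheld_pos, cameras, max_distance_m=200.0):
    """Return the camera devices within a preset range of the handheld device.

    `cameras` is a hypothetical list of dicts with "id", "lat", "lon" keys;
    the 200 m default range is an illustrative assumption.
    """
    lat, lon = handheld_pos
    return [c for c in cameras
            if haversine_m(lat, lon, c["lat"], c["lon"]) <= max_distance_m]


cams = [
    {"id": "cam-1", "lat": 36.0661, "lon": 120.3826},  # ~15 m away
    {"id": "cam-2", "lat": 36.1000, "lon": 120.4000},  # ~4 km away
]
print([c["id"] for c in cameras_in_range((36.0660, 120.3825), cams)])  # → ['cam-1']
```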
Optionally, the video processing apparatus further comprises: a second acquisition unit, used, after the identification information corresponding to the persons in the video has been extracted and the correspondence between the identification information and the video has been stored, to acquire the identification information of the video to be extracted and to search the stored videos for one or more videos corresponding to that identification information; and a display unit, used to display the one or more videos to the user corresponding to the identification information of the video to be extracted.
In an alternative embodiment, the extraction unit is used to identify, from a person, attachments on the person and/or biological characteristics of the person, and to use the characteristic information of the attachment and/or of the biological characteristic as the identification information identifying the person; the second acquisition unit is used to acquire the characteristic information of the person's attachment and/or the identity information of the person's biological characteristic, and to determine the characteristic information of the attachment and/or the characteristic information corresponding to the biological characteristic as the identification information of the person. The attachment comprises at least one of: apparel, accessories, and hand-held articles; the biological characteristic comprises at least one of: facial features and posture features. The attachment is used to uniquely identify the person within a predetermined area.
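As one hypothetical way to turn attachment and/or biometric feature information into an identification key (the application prescribes no concrete encoding; real systems would match feature vectors with a recognition model, and the hashing below is purely illustrative):

```python
import hashlib


def identity_from_features(attachment_features=None, biometric_features=None) -> str:
    """Derive a stable identity key from attachment and/or biometric features.

    Illustrative placeholder only: we hash the sorted, concatenated feature
    descriptors so the same observed features always yield the same key,
    which can uniquely identify a person within a predetermined area.
    """
    parts = []
    if attachment_features:   # e.g. ["red-jacket", "race-bib-1024"]
        parts.extend(sorted(attachment_features))
    if biometric_features:    # e.g. ["face:emb-3f2a", "gait:emb-9c01"]
        parts.extend(sorted(biometric_features))
    if not parts:
        raise ValueError("at least one feature source is required")
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()[:16]


# The same observed features map to the same identity key:
print(identity_from_features(["race-bib-1024"], ["face:emb-3f2a"]))
```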
Optionally, the extraction unit is further configured to extract the sensed radio-frequency identification information from a radio-frequency signal and use it as the identity information identifying the person, or to extract the identity information identifying the person from a network trigger signal; correspondingly, the second acquisition unit is further used to acquire the radio-frequency identification information extracted from the radio-frequency signal and determine it as the identity information of the person, or to extract the identity information identifying the person from the network trigger signal.
In an alternative embodiment, in the case where there are multiple persons in the video, the extraction unit comprises: a first identification module, used to identify the attachments and/or biological characteristics on each of the plurality of persons; and a first storage module, used to determine the characteristic information of the attachments on each person and/or the characteristic information corresponding to the biological characteristics as the identification information of each of the plurality of persons, and to store the correspondence between the identification information of each of the plurality of persons and the video.
In an alternative embodiment, the first storage module comprises: a first determining submodule, used to determine the time node at which the identification information of each of the plurality of persons is recognized in the video; a second determining submodule, used to take that time node as the time tag of the identification information of each of the plurality of persons; and a storage submodule, used to store the correspondence between the identification information of each of the plurality of persons, together with the time tag added to it, and the video.
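The time-tagging of each person's identification information might be sketched as follows (the function name, the index layout, and the seconds-from-start unit are all assumptions for illustration):

```python
def tag_identities(detections, video_id, index):
    """Record, for each person, the time node at which their identity was
    recognized in the video, and store the (identity, time tag) -> video
    correspondence.

    detections: iterable of (person_id, time_node_s) pairs from a recognizer
    video_id:   identifier of the video being processed
    index:      dict mapping person_id -> list of (video_id, time_tag) entries
    """
    for person_id, time_node_s in detections:
        # The time node at which the person was recognized doubles as the
        # time tag attached to their identification information.
        index.setdefault(person_id, []).append((video_id, time_node_s))
    return index


# Two people recognized at different moments of the same clip:
index = tag_identities([("p1", 3.2), ("p2", 7.5), ("p1", 12.0)], "vid-9", {})
print(index["p1"])  # → [('vid-9', 3.2), ('vid-9', 12.0)]
```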
In an alternative embodiment, the movement track of the camera comprises at least one of the following: a reciprocating track between a preset starting point and a preset end point, a cyclic track along a preset path, a track designed by a preset program, and a tracking track that follows a target object.
In an alternative embodiment, the camera is moved in at least one of the following ways: orbital movement, rotational movement.
In an alternative embodiment, the driving device drives the camera in at least one of the following ways: mechanical drive, electromagnetic drive, and pressure drive.
Example nineteen
According to another aspect of the embodiments of the present invention, there is also provided a storage medium comprising a stored program, wherein, when the program runs, it executes the video processing method of any one of the above embodiments.
Example twenty
According to another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, wherein, when the program runs, it executes the video processing method of any one of the above embodiments.
The above serial numbers of the embodiments of the present invention are merely for description and do not imply any ranking of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (34)

1. A video processing method, comprising:
receiving a trigger signal, wherein the trigger signal is used for triggering a camera device to shoot, the camera device comprises a driving device and a camera, and the driving device is used for driving the camera in the camera device to move; the camera is used for shooting videos in the moving process;
acquiring a video shot by the camera under the trigger of the trigger signal;
and extracting the identification information corresponding to the people in the video from the video, and storing the corresponding relation between the identification information and the video.
2. The method of claim 1, wherein prior to receiving the trigger signal, the method further comprises:
sensing that a person appears in a shooting range of the camera device through a sensor arranged on the camera device;
and responding to the sensing of the sensor and sending out the trigger signal.
3. The method of claim 2, wherein the sensor is at least one of: an infrared sensing unit, a radio-frequency sensing unit, and a radar detection unit.
4. The method of claim 1, wherein prior to receiving the trigger signal, the method further comprises:
receiving an operation of a switch in the image pickup device by a user;
issuing the trigger signal in response to the operation.
5. The method of claim 4, wherein the switch is at least one of: the device comprises a key switch unit, a touch switch unit and a photoelectric switch unit.
6. The method of claim 1, wherein prior to receiving the trigger signal, the method further comprises:
receiving the operation of a user on the software interface;
and responding to the operation, and sending the trigger signal to the camera device through a network.
7. The method of claim 6, wherein prior to receiving the user's operation on the software interface, the method further comprises:
acquiring identification information of the camera device by scanning a graphical code arranged on the camera device;
and displaying the operation which can be carried out on the camera device on the software interface according to the identification information.
8. The method of claim 6, wherein prior to receiving the user's operation on the software interface, the method further comprises:
acquiring the geographical position information of the handheld device, in the case where the software interface is displayed on the person's handheld device;
and displaying on the software interface, according to the geographical position information, the camera devices within a preset range of the handheld device that the person can control, and the operations that can be performed on them.
9. The method according to claim 1, wherein after extracting identification information corresponding to the person in the video from the video and storing the correspondence between the identification information and the video, the method further comprises:
acquiring identity identification information of a video to be extracted, and searching one or more videos corresponding to the identity identification information of the video to be extracted in stored videos;
and displaying the one or more videos to a user corresponding to the identification information of the video to be extracted.
10. The method of claim 9, wherein extracting identification information corresponding to the people in the video from the video comprises: identifying from the person an attachment on the person and/or a biometric of the person; using the characteristic information of the attachment and/or the characteristic information of the biological characteristic as identification information for identifying the person;
the method for acquiring the identity identification information of the video to be extracted comprises the following steps: acquiring feature information of the attachment of the person and/or identity information of the biological feature of the person, and determining the feature information of the attachment and/or the feature information corresponding to the biological feature as the identity information of the person; wherein the attachment comprises at least one of: apparel, accessories, hand held articles; the biometric characteristic comprises at least one of: facial features, posture features; the attachment is for uniquely identifying the person in a predetermined area.
11. The method of claim 10, wherein extracting identification information corresponding to the people in the video from the video comprises: extracting the induced radio frequency identification information from the radio frequency signal; the radio frequency identification information is used as identity identification information for identifying the person; extracting identity identification information for identifying the person from the network trigger signal;
the method for acquiring the identity identification information of the video to be extracted comprises the following steps: acquiring induced radio frequency identification information extracted from the radio frequency signal, and determining the radio frequency identification information as the identity information of the person; and extracting the identity identification information for identifying the person from the network trigger signal.
12. The method of claim 11, wherein, when the number of people in the video is multiple, extracting the identification information corresponding to the people in the video from the video comprises:
identifying attachments and/or biometrics on each person from the plurality of persons;
and determining the characteristic information of the attachment on each person and/or the characteristic information corresponding to the biological characteristics as the identification information of each person in the plurality of persons, and storing the corresponding relation between the identification information of each person in the plurality of persons and the video.
13. The method of claim 12, wherein storing the correspondence between the identification information of each of the plurality of people and the video comprises:
determining a time node at which the identification information of each of the plurality of people is identified in the video;
taking the time node as a time tag of the identification information of each person in the plurality of persons;
and storing the identification information of each of the plurality of persons and the corresponding relation between the time labels added to the identification information of each of the plurality of persons and the video.
14. The method of claim 1, wherein the movement track of the camera comprises at least one of the following: a reciprocating track between a preset starting point and a preset end point, a cyclic track along a preset path, a track designed by a preset program, and a tracking track that follows a target object.
15. The method of claim 14, wherein the camera is moved in at least one of: orbital movement, rotational movement.
16. The method of claim 1, wherein the driving device drives the camera in at least one of the following ways: mechanical drive, electromagnetic drive, and pressure drive.
17. A video processing apparatus, comprising:
the device comprises a first receiving unit, a second receiving unit and a control unit, wherein the first receiving unit is used for receiving a trigger signal, the trigger signal is used for triggering a camera device to shoot, the camera device comprises a driving device and a camera, and the driving device is used for driving the camera in the camera device to move; the camera is used for shooting videos in the moving process;
the first acquisition unit is used for acquiring a video shot by the camera under the trigger of the trigger signal;
and the extraction unit is used for extracting the identification information corresponding to the people in the video from the video and storing the corresponding relation between the identification information and the video.
18. The apparatus of claim 17, further comprising:
the sensing unit is used for sensing that a person is in the shooting range of the camera device through a sensor arranged on the camera device before the triggering signal is received;
and the first response unit is used for responding to the induction of the sensor and sending out the trigger signal.
19. The apparatus of claim 18, wherein the sensor is at least one of: an infrared sensing unit, a radio-frequency sensing unit, and a radar detection unit.
20. The apparatus of claim 17, further comprising:
a second receiving unit configured to receive an operation of a switch in the image pickup apparatus by a user before receiving the trigger signal;
a second response unit for issuing the trigger signal in response to the operation.
21. The apparatus of claim 20, wherein the switch is at least one of: the device comprises a key switch unit, a touch switch unit and a photoelectric switch unit.
22. The apparatus of claim 17, further comprising:
the third receiving unit is used for receiving the operation of a user on the software interface before receiving the trigger signal;
and the third response unit is used for responding to the operation and sending the trigger signal to the image pickup device through a network.
23. The apparatus of claim 22, further comprising:
the third acquisition unit is used for acquiring the identification information of the camera device by scanning the graphical code arranged on the camera device before receiving the operation of a user on a software interface;
and the display unit is used for displaying the operation which can be carried out on the camera device on the software interface according to the identification information.
24. The apparatus of claim 22, further comprising:
the fourth acquisition unit, used to acquire the geographical position information of the handheld device, in the case where the software interface is displayed on the person's handheld device, before the user's operation on the software interface is received;
and the display unit, used to display on the software interface, according to the geographical position information, the camera devices within a preset range of the handheld device that the person can control, and the operations that can be performed on them.
25. The apparatus of claim 17, further comprising:
the second acquisition unit is used for acquiring the identification information of the video to be extracted after the identification information corresponding to the person in the video is extracted from the video and the corresponding relation between the identification information and the video is stored, and searching one or more videos corresponding to the identification information of the video to be extracted from the stored videos;
and the display unit is used for displaying the one or more videos to a user corresponding to the identification information of the video to be extracted.
26. The apparatus according to any one of claims 17 to 25, wherein the extraction unit is configured to identify, from the person, an attachment on the person and/or a biometric feature of the person; using the characteristic information of the attachment and/or the characteristic information of the biological characteristic as identification information for identifying the person;
the second acquiring unit is configured to acquire feature information of an attachment of the person and/or identification information of a biometric feature of the person, and determine the feature information of the attachment and/or the feature information corresponding to the biometric feature as the identification information of the person; wherein the attachment comprises at least one of: apparel, accessories, hand-held articles; the biometric characteristic comprises at least one of: facial features, posture features; the attachment is used to uniquely identify the person in a predetermined area.
27. The apparatus of claim 26, wherein the extracting unit is further configured to extract the sensed radio frequency identification information from the radio frequency signal; the radio frequency identification information is used as identity identification information for identifying the person; extracting identity identification information for identifying the person from the network trigger signal;
the second obtaining unit is further configured to obtain radio frequency identification information extracted and sensed from the radio frequency signal, and determine the radio frequency identification information as the identification information of the person; and extracting the identity identification information for identifying the person from the network trigger signal.
28. The apparatus according to claim 27, wherein in a case where the number of persons in the video is plural, the extracting unit includes:
a first identification module for identifying an attachment and/or biometric characteristic on each of the plurality of persons;
the first storage module is used for determining the characteristic information of the attachment on each person and/or the characteristic information corresponding to the biological characteristics as the identification information of each person in the plurality of persons and storing the corresponding relation between the identification information of each person in the plurality of persons and the video.
29. The apparatus of claim 28, wherein the first storage module comprises:
a first determining submodule, configured to determine a time node at which identification information of each of the plurality of persons is identified in the video;
the second determining submodule is used for taking the time node as a time label of the identity identification information of each person in the plurality of persons;
and the storage submodule is used for storing the identification information of each person in the plurality of persons and the corresponding relation between the time labels added to the identification information of each person in the plurality of persons and the video.
30. The apparatus of claim 17, wherein the movement track of the camera comprises at least one of the following: a reciprocating track between a preset starting point and a preset end point, a cyclic track along a preset path, a track designed by a preset program, and a tracking track that follows a target object.
31. The apparatus of claim 30, wherein the camera is moved in at least one of: orbital movement, rotational movement.
32. The apparatus of claim 17, wherein the driving device drives the camera in at least one of the following ways: mechanical drive, electromagnetic drive, and pressure drive.
33. A storage medium characterized by comprising a stored program, wherein the program executes the video processing method of any one of claims 1 to 16.
34. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to execute the video processing method according to any one of claims 1 to 16 when running.
CN201911380674.8A 2019-12-27 2019-12-27 Video processing method and device Pending CN113055586A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911380674.8A CN113055586A (en) 2019-12-27 2019-12-27 Video processing method and device
PCT/CN2020/134645 WO2021129382A1 (en) 2019-12-27 2020-12-08 Video processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911380674.8A CN113055586A (en) 2019-12-27 2019-12-27 Video processing method and device

Publications (1)

Publication Number Publication Date
CN113055586A true CN113055586A (en) 2021-06-29

Family

ID=76506813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911380674.8A Pending CN113055586A (en) 2019-12-27 2019-12-27 Video processing method and device

Country Status (2)

Country Link
CN (1) CN113055586A (en)
WO (1) WO2021129382A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113617698B (en) * 2021-08-20 2022-12-06 杭州海康机器人股份有限公司 Package tracing method, device and system, electronic equipment and storage medium
CN114247125B (en) * 2021-12-29 2022-12-06 尚道科技(深圳)有限公司 Method and system for recording score data in sports area based on identification module

Citations (6)

Publication number Priority date Publication date Assignee Title
CN104581062A (en) * 2014-12-26 2015-04-29 中通服公众信息产业股份有限公司 Video monitoring method and system capable of realizing identity information and video linkage
CN106559654A (en) * 2016-11-18 2017-04-05 广州炫智电子科技有限公司 A kind of recognition of face monitoring collection system and its control method
CN108388672A (en) * 2018-03-22 2018-08-10 西安艾润物联网技术服务有限责任公司 Lookup method, device and the computer readable storage medium of video
EP3367281A1 (en) * 2017-02-27 2018-08-29 Giesecke+Devrient Mobile Security GmbH Method for verifying the identity of a user
CN109905595A (en) * 2018-06-20 2019-06-18 成都市喜爱科技有限公司 A kind of method, apparatus, equipment and medium shot and play
CN110532432A (en) * 2019-08-21 2019-12-03 深圳供电局有限公司 A kind of personage's trajectory retrieval method and its system, computer readable storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US10043551B2 (en) * 2015-06-25 2018-08-07 Intel Corporation Techniques to save or delete a video clip
CN105159959A (en) * 2015-08-20 2015-12-16 广东欧珀移动通信有限公司 Image file processing method and system
CN105279273B (en) * 2015-10-28 2018-03-16 广东欧珀移动通信有限公司 Photo classification method and device
CN106412429B (en) * 2016-09-30 2019-12-17 深圳春沐源控股有限公司 image processing method and device based on greenhouse
CN111368724B (en) * 2020-03-03 2023-09-29 成都市喜爱科技有限公司 Amusement image generation method and system

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN104581062A (en) * 2014-12-26 2015-04-29 中通服公众信息产业股份有限公司 Video monitoring method and system capable of realizing identity information and video linkage
CN106559654A (en) * 2016-11-18 2017-04-05 广州炫智电子科技有限公司 A kind of recognition of face monitoring collection system and its control method
EP3367281A1 (en) * 2017-02-27 2018-08-29 Giesecke+Devrient Mobile Security GmbH Method for verifying the identity of a user
CN108388672A (en) * 2018-03-22 2018-08-10 西安艾润物联网技术服务有限责任公司 Lookup method, device and the computer readable storage medium of video
CN109905595A (en) * 2018-06-20 2019-06-18 成都市喜爱科技有限公司 A kind of method, apparatus, equipment and medium shot and play
CN110532432A (en) * 2019-08-21 2019-12-03 深圳供电局有限公司 A kind of personage's trajectory retrieval method and its system, computer readable storage medium

Also Published As

Publication number Publication date
WO2021129382A1 (en) 2021-07-01

Similar Documents

Publication Publication Date Title
CN113055586A (en) Video processing method and device
EP3585571B1 (en) Moving robot and control method thereof
US9224037B2 (en) Apparatus and method for controlling presentation of information toward human object
CN106843460B (en) Multiple target position capture positioning system and method based on multi-cam
CN209557972U (en) A kind of terminal automatic rising-sinking platform
CN110929596A (en) Shooting training system and method based on smart phone and artificial intelligence
CN104308530B (en) The isolator star-wheel Automated assembly device of view-based access control model detection
CN107654406B (en) Fan air supply control device, fan air supply control method and device
CN104308531B (en) The isolator star-wheel automatized assembly method of view-based access control model detection
CN109543652B (en) Intelligent skiing trainer, training result display method thereof and cloud server
CN103802111A (en) Chess playing robot
CN205656672U (en) 3D fitting room that forms images
CN106514671A (en) Intelligent doorman robot
CN201904848U (en) Tracking and shooting device based on multiple cameras
CN207050861U (en) A kind of Portable infrared imaging instrument
CN109558867A (en) Information collecting device, method and application
CN102867172B (en) A kind of human-eye positioning method, system and electronic equipment
CN106980847B (en) AR game and activity method and system based on ARMark generation and sharing
CN106041950A (en) System and method for controlling domestic robot
CN206330897U (en) A kind of online testing device of streamline product electrical property
CN106155115B (en) The fitup and control method that can precisely navigate
CN205408002U (en) All -round automatic device of shooing in scenic spot
CN105894573A (en) 3D imaging fitting room and imaging method thereof
CN207983331U (en) A kind of face tracking robot and face tracking equipment
CN114623400B (en) Sitting posture identification desk lamp system and identification method based on remote intelligent monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210629)