WO2021129382A1 - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number
WO2021129382A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
identification information
person
camera
trigger signal
Prior art date
Application number
PCT/CN2020/134645
Other languages
French (fr)
Chinese (zh)
Inventor
聂兰龙
Original Assignee
青岛千眼飞凤信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 青岛千眼飞凤信息技术有限公司
Publication of WO2021129382A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording

Definitions

  • This application relates to the technical field of video processing, and in particular to a video processing method and device.
  • At present, video is mainly obtained independently with cameras, mobile phones, video cameras, and drones; there is a lack of methods and means for providing high-quality video services.
  • High-quality video requires not only video capture equipment with good imaging effects, but also auxiliary shooting means.
  • Video captured with a traditional fixed camera has a rigid background and lacks dynamic effects, making an ideal presentation impossible.
  • Photographers usually need to set up rail cars and camera cantilevers to achieve the ideal effect.
  • The embodiments of the present application provide a video processing method and device to at least solve the technical problem of low reliability of video processing methods in the related art.
  • A video processing method is provided, including: receiving a trigger signal, where the trigger signal is used to trigger a camera device to shoot, the camera device including a driving device and a camera, the driving device being used to drive the camera of the camera device to move and the camera being used to shoot video during the movement; acquiring the video captured by the camera under the trigger of the trigger signal; extracting from the video the identification information corresponding to the person in the video; and saving the correspondence between the identification information and the video.
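  • As a non-authoritative illustration of this claimed flow (trigger, capture, identity extraction, saved correspondence), the following Python sketch wires the steps together; the class, the trigger-signal dictionary, and the placeholder identity sources are assumptions made for illustration, not the disclosed implementation.

```python
# Illustrative sketch of the claimed flow; extract_identity() and the storage
# layout are hypothetical placeholders, not the patented implementation.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VideoRecord:
    video_path: str
    identity_ids: List[str] = field(default_factory=list)


class VideoProcessor:
    def __init__(self):
        # correspondence table: identity id -> videos containing that person
        self.index: Dict[str, List[str]] = {}

    def on_trigger(self, trigger_signal: dict) -> VideoRecord:
        video_path = self.capture_video(trigger_signal)                    # step S104
        identity_ids = self.extract_identity(video_path, trigger_signal)  # step S106
        record = VideoRecord(video_path, identity_ids)
        for identity in identity_ids:
            self.index.setdefault(identity, []).append(video_path)
        return record

    def capture_video(self, trigger_signal: dict) -> str:
        # In the disclosure this is the camera moved by the driving device;
        # here we only return a file path for illustration.
        return f"/videos/{trigger_signal['device_id']}_{trigger_signal['time']}.mp4"

    def extract_identity(self, video_path: str, trigger_signal: dict) -> List[str]:
        # Identity may come from an RFID id carried with the trigger, a network
        # (APP) user id, or recognized facial/attachment features.
        if "rfid" in trigger_signal:
            return [trigger_signal["rfid"]]
        if "app_user" in trigger_signal:
            return [trigger_signal["app_user"]]
        return []  # a real system would run face/attachment recognition here
```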
  • Before receiving the trigger signal, the video processing method further includes: sensing, through a sensor provided on the camera device, that a person is present within the shooting range of the camera device; and, in response to the sensing, sending out the trigger signal.
  • The sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
  • Before receiving the trigger signal, the video processing method further includes: receiving a user's operation of a switch on the camera device; and, in response to the operation, sending out the trigger signal.
  • The switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
  • Before receiving the trigger signal, the video processing method further includes: receiving a user's operation on a software interface; and, in response to the operation, sending the trigger signal to the camera device via a network.
  • Before receiving the user's operation on the software interface, the video processing method further includes: acquiring the identification information of the camera device by scanning a graphical code provided on the camera device; and, according to the identification information, displaying on the software interface the operations that can be performed on the camera device.
  • The method further includes: acquiring the identification information of the video to be extracted, and searching the saved videos for one or more videos corresponding to the identification information of the video to be extracted; and displaying the one or more videos to the user corresponding to the identification information of the video to be extracted.
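  • A minimal sketch of that look-up step, assuming the correspondence table from the previous sketch is an in-memory dictionary keyed by identification information:

```python
# Sketch of the retrieval step: given the identification information of the
# video to be extracted, return the matching saved videos. The in-memory
# index mirrors the correspondence table above and is an assumption.
def find_videos_for_identity(index: dict, identity_id: str) -> list:
    """Return the saved video paths whose stored identification information
    matches identity_id (an empty list if none were found)."""
    return list(index.get(identity_id, []))


# Usage with the hypothetical processor from the earlier sketch:
# find_videos_for_identity(processor.index, "rfid:E200-1234")
```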
  • Before receiving the user's operation on the software interface, the video processing method further includes: when the software interface is displayed on the person's handheld device, acquiring the geographic location information of the handheld device; and, according to the geographic location information, displaying on the software interface the camera devices within a predetermined range of the handheld device that can be controlled by the person, and the operations that can be performed on those camera devices.
  • Extracting from the video the identification information corresponding to the person in the video includes: extracting the sensed radio frequency identification information from a radio frequency signal and using the radio frequency identification information as the identification information that identifies the person; or extracting the identification information that identifies the person from a network trigger signal. Acquiring the identification information of the video to be extracted includes: extracting the sensed radio frequency identification information from the radio frequency signal and determining the radio frequency identification information as the identification information of the person; or extracting the identification information that identifies the person from the network trigger signal.
  • Extracting from the video the identification information corresponding to the person in the video includes: recognizing the attachments on the person and/or the biological characteristics of the person, and using the feature information of the attachments and/or the feature information of the biological characteristics as the identification information that identifies the person.
  • Acquiring the identification information of the video to be extracted includes: acquiring the feature information of the person's attachments and/or the feature information of the person's biological characteristics, and determining the feature information of the attachments and/or the feature information corresponding to the biological characteristics as the identification information of the person.
  • The attachments include at least one of the following: clothing, accessories, and hand-held items; the biological characteristics include at least one of the following: facial characteristics and posture characteristics; an attachment is used to uniquely identify the person within a predetermined area.
  • When the video contains multiple persons, extracting the identification information corresponding to the persons in the video includes: identifying the attachments and/or biological characteristics of each of the multiple persons; determining the feature information of the attachments on each person and/or the feature information corresponding to the biological characteristics as the identification information of each of the multiple persons; and saving the correspondence between the identification information of each of the multiple persons and the video.
  • Saving the correspondence between the identification information of each of the multiple persons and the video includes: determining the time node at which the identification information of each of the multiple persons is recognized in the video; using the time node as a time tag of the identification information of each of the multiple persons; and saving the correspondence between the video and the identification information of each of the multiple persons together with the time tag added to that identification information.
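  • The per-person time tags can be pictured as in the sketch below, assuming recognition reports (identity, time-node) pairs; the class and field names are hypothetical.

```python
# Sketch of one way to store per-person time tags with the video, assuming
# recognition runs over the footage and reports (identity, timestamp) pairs.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TaggedVideo:
    video_path: str
    # identity id -> time nodes (seconds from the start of the video) at
    # which that person's identification information was recognized
    time_tags: Dict[str, List[float]] = field(default_factory=dict)

    def add_recognition(self, identity_id: str, time_node: float) -> None:
        self.time_tags.setdefault(identity_id, []).append(time_node)


# Usage: record that two visitors were recognized at different moments.
tagged = TaggedVideo("/videos/booth_001.mp4")
tagged.add_recognition("face:alice", 3.2)
tagged.add_recognition("face:bob", 7.8)
```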
  • The movement trajectory of the camera includes at least one of the following: a reciprocating trajectory between a predetermined starting point and a predetermined end point, a cyclic trajectory along a predetermined path, a trajectory designed by a predetermined program, and a tracking trajectory that follows a target object.
  • The movement of the camera is at least one of the following: track movement and rotational movement.
  • The driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
  • A video processing device is also provided, including: a first receiving unit configured to receive a trigger signal, where the trigger signal is used to trigger a camera device to shoot, the camera device including a driving device and a camera, the driving device being used to drive the camera of the camera device to move and the camera being used to shoot video during the movement; a first acquisition unit configured to acquire the video captured by the camera under the trigger of the trigger signal; and an extraction unit configured to extract from the video the identification information corresponding to the person in the video and to save the correspondence between the identification information and the video.
  • The video processing device further includes: a sensing unit configured to sense, through a sensor provided on the camera device and before the trigger signal is received, that a person is present within the shooting range of the camera device; and a response unit configured to send out the trigger signal in response to the sensing of the sensor.
  • The sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
  • The video processing device further includes: a second receiving unit configured to receive, before the trigger signal is received, a user's operation of a switch on the camera device; and a second response unit configured to send out the trigger signal in response to the operation.
  • The switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
  • The video processing device further includes: a third receiving unit configured to receive, before the trigger signal is received, a user's operation on the software interface; and a third response unit configured to send the trigger signal to the camera device via the network in response to the operation.
  • The device further includes: a third acquisition unit configured to acquire, before the user's operation on the software interface is received, the identification information of the camera device by scanning a graphical code provided on the camera device; and a display unit configured to display on the software interface, according to the identification information, the operations that can be performed on the camera device.
  • The video processing device further includes: a fourth acquisition unit configured to acquire, before the user's operation on the software interface is received and when the software interface is displayed on the person's handheld device, the geographic location information of the handheld device; and a display unit configured to display on the software interface, according to the geographic location information, the camera devices within a predetermined range of the handheld device that can be controlled by the person, and the operations that can be performed on those camera devices.
  • The device further includes: a second acquisition unit configured to, after the identification information corresponding to the person in the video has been extracted from the video and the correspondence between the identification information and the video has been saved, acquire the identification information of the video to be extracted and search the saved videos for one or more videos corresponding to the identification information of the video to be extracted; and a display unit configured to display the one or more videos to the user corresponding to the identification information of the video to be extracted.
  • The extraction unit is configured to recognize, from the person, the attachments on the person and/or the biological characteristics of the person, and to use the feature information of the attachments and/or the feature information of the biological characteristics as the identification information that identifies the person.
  • The second acquisition unit is configured to acquire the feature information of the person's attachments and/or the feature information of the person's biological characteristics, and to determine the feature information of the attachments and/or the feature information corresponding to the biological characteristics as the identification information of the person.
  • The attachments include at least one of the following: clothing, accessories, and hand-held objects; the biological characteristics include at least one of the following: facial features and posture features; an attachment is used to uniquely identify the person within a predetermined area.
  • The extraction unit is further configured to extract the sensed radio frequency identification information from a radio frequency signal and use the radio frequency identification information as the identification information that identifies the person, or to extract the identification information that identifies the person from a network trigger signal. The second acquisition unit is further configured to acquire the radio frequency identification information sensed from the radio frequency signal and determine it as the identification information of the person, or to extract the identification information that identifies the person from the network trigger signal.
  • The extraction unit includes: a first identification module configured to identify the attachments and/or biological characteristics of each of the multiple persons; and a first saving module configured to determine the feature information of the attachments on each person and/or the feature information corresponding to the biological characteristics as the identification information of each of the multiple persons, and to save the correspondence between the identification information of each of the multiple persons and the video.
  • The first saving module includes: a first determining submodule configured to determine the time node at which the identification information of each of the multiple persons is recognized in the video; a second determining submodule configured to use the time node as the time tag of the identification information of each of the multiple persons; and a saving submodule configured to save the correspondence between the video and the identification information of each of the multiple persons together with the time tag added to that identification information.
  • The movement trajectory of the camera includes at least one of the following: a reciprocating trajectory between a predetermined starting point and a predetermined end point, a cyclic trajectory along a predetermined path, a trajectory designed by a predetermined program, and a tracking trajectory that follows a target object.
  • The movement of the camera is at least one of the following: track movement and rotational movement.
  • The driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
  • A storage medium is also provided, including a stored program, where the program, when run, executes the video processing method described in any one of the foregoing.
  • A processor is also provided, configured to run a program, where the video processing method described in any one of the foregoing is executed when the program runs.
  • In the embodiments of the present application, a trigger signal is received, where the trigger signal is used to trigger the camera device to shoot; the camera device includes a driving device and a camera, the driving device is used to drive the camera of the camera device to move, and the camera is used to shoot video during the movement. The processing method achieves the purpose of saving in advance the video containing the user together with the identification information of the person recognized from the video, and of querying the user's video from the pre-stored video library according to the user's identification information, thereby achieving the technical effect of improving the reliability of the video processing method, improving the user experience, and solving the technical problem of low reliability of video processing methods in the related art.
  • Fig. 1 is a flowchart of a video processing method according to Embodiment 1 of the present application.
  • Fig. 2 is a schematic diagram of an application scenario of a photographing device according to Embodiment 1 of the present application.
  • Fig. 3 is a schematic diagram of a photographing device according to Embodiment 1 of the present application.
  • Fig. 4 is a first schematic diagram of an application scenario of a photographing device according to Embodiment 2 of the present application.
  • Fig. 5 is a second schematic diagram of an application scenario of a photographing device according to Embodiment 2 of the present application.
  • Fig. 6 is a schematic diagram of a photographing device according to Embodiment 2 of the present application.
  • Fig. 7 is a first schematic diagram of an application scenario of a photographing device according to Embodiment 3 of the present application.
  • Fig. 8 is a second schematic diagram of an application scenario of a photographing device according to Embodiment 3 of the present application.
  • Fig. 9 is a first schematic diagram of an application scenario of a photographing device according to Embodiment 4 of the present application.
  • Fig. 10 is a second schematic diagram of an application scenario of a photographing device according to Embodiment 4 of the present application.
  • Fig. 11 is a schematic diagram of a photographing device according to Embodiment 4 of the present application.
  • Fig. 12 is a schematic diagram of an application scenario of a photographing device according to Embodiment 5 of the present application.
  • Fig. 13 is a schematic diagram of an application scenario of a photographing device according to Embodiment 6 of the present application.
  • Fig. 14 is a schematic diagram of a photographing device according to Embodiment 6 of the present application.
  • Fig. 15 is a schematic diagram of an application scenario of a photographing device according to Embodiment 7 of the present application.
  • Fig. 16 is a schematic diagram of a photographing device according to Embodiment 7 of the present application.
  • Fig. 17 is a schematic diagram of an application scenario of a photographing device according to Embodiment 8 of the present application.
  • Fig. 18 is a schematic diagram of a photographing device according to Embodiment 8 of the present application.
  • Fig. 19 is a schematic diagram of an application scenario of a photographing device according to Embodiment 9 of the present application.
  • Fig. 20 is a schematic diagram of a photographing device according to Embodiment 9 of the present application.
  • Fig. 21 is a schematic diagram of an application scenario of a photographing device according to Embodiment 10 of the present application.
  • Fig. 22 is a schematic diagram of a photographing device according to Embodiment 10 of the present application.
  • Fig. 23 is a schematic diagram of an application scenario of a photographing device according to Embodiment 11 of the present application.
  • Fig. 24 is a schematic diagram of a photographing device according to Embodiment 11 of the present application.
  • Fig. 25 is a schematic diagram of an application scenario of a photographing device according to Embodiment 12 of the present application.
  • Fig. 26 is a schematic diagram of a photographing device according to Embodiment 12 of the present application.
  • Fig. 27 is a schematic diagram of an application scenario of a photographing device according to Embodiment 13 of the present application.
  • Fig. 28 is a schematic diagram of a photographing device according to Embodiment 13 of the present application.
  • Fig. 29 is a schematic diagram of an application scenario of a photographing device according to Embodiment 14 of the present application.
  • Fig. 30 is a schematic diagram of a photographing device according to Embodiment 14 of the present application.
  • Fig. 31 is a schematic diagram of an application scenario of a photographing device according to Embodiment 15 of the present application.
  • Fig. 32 is a schematic diagram of a photographing device according to Embodiment 15 of the present application.
  • Fig. 33 is a schematic diagram of an application scenario of a photographing device according to Embodiment 16 of the present application.
  • Fig. 34 is a schematic diagram of a photographing device according to Embodiment 16 of the present application.
  • Fig. 35 is a first schematic diagram of an application scenario of a photographing device according to Embodiment 17 of the present application.
  • Fig. 36 is a second schematic diagram of an application scenario of a photographing device according to Embodiment 17 of the present application.
  • Fig. 37 is a schematic diagram of a photographing device according to Embodiment 17 of the present application.
  • Fig. 38 is a schematic diagram of a video processing device according to Embodiment 18 of the present application.
  • A method embodiment of a video processing method is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be executed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that described here.
  • Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application. As shown in Fig. 1, the video processing method includes the following steps:
  • Step S102: Receive a trigger signal, where the trigger signal is used to trigger the camera device to shoot; the camera device includes a driving device and a camera, the driving device is used to drive the camera of the camera device to move, and the camera is used to shoot video during the movement.
  • The driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving. Mechanical driving can be realized with rollers, ropes, conveyor belts, screw rods, and the like; electromagnetic driving can be realized with linear motors, magnetic levitation, and the like; pressure driving can be realized with fluid pressure such as hydraulic or pneumatic pressure, for example water power, wind power, a hydraulic pump, or an air pump.
  • The driving device drives the camera of the camera device to move, which can be realized by installing a certain movement track, where the movement trajectory of the camera can include at least one of the following: a reciprocating trajectory between a predetermined starting point and a predetermined end point, a cyclic trajectory along a predetermined path, a trajectory designed by a predetermined program, and a tracking trajectory that follows a target object.
  • The movement trajectory can be a reciprocating movement between a predetermined starting point and end point, a cyclic movement along a predetermined path, a movement that executes a predetermined program, a tracking movement that follows the target person, or a combination of the above movement modes. As an example of mixed execution, a predetermined program is executed after the target person has been tracked for a fixed period of time, and the target person is tracked again for a fixed period of time after the execution of the predetermined program is completed.
  • The movement mode of the camera is at least one of the following: track movement and rotational movement.
  • In track movement, the track can be a sliding track, a telescopic track, or a rope track; in rotational movement, the rotation can be about a single joint or multiple joints, and the rotating mechanism can be a rocker arm or a wheel. In addition, the movement mode can also be a mixture of track movement and rotational movement.
  • The video processing method may further include: receiving a user's operation on the software interface; and, in response to the operation, sending the trigger signal to the camera device via the network.
  • The video processing method may further include: acquiring the identification information of the camera device by scanning a graphical code provided on the camera device; and, according to the identification information, displaying on the software interface the operations that can be performed on the camera device.
  • The video processing method may further include: when the software interface is displayed on the person's handheld device, acquiring the geographic location information of the handheld device; and, based on the geographic location information, displaying on the software interface the camera devices within a predetermined range of the handheld device that can be controlled by the person, and the operations that can be performed on those camera devices.
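  • A small sketch of the distance filter implied here, assuming each camera device record carries latitude/longitude and using a haversine distance; the radius and field names are illustrative assumptions only.

```python
# Sketch of filtering controllable camera devices by distance from the
# handheld device; the device list, coordinates, and radius are assumptions.
import math


def nearby_devices(devices, phone_lat, phone_lon, radius_m=200.0):
    """Return devices within radius_m metres of the handheld device."""
    def haversine_m(lat1, lon1, lat2, lon2):
        r = 6371000.0  # Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    return [d for d in devices
            if haversine_m(phone_lat, phone_lon, d["lat"], d["lon"]) <= radius_m]
```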
  • The data information obtained via the network and sent by the portable terminal APP contains the user identification and confirms the use of the video capture device. The data information confirming the use of the video capture device can be generated by the user entering the code of the video capture device in the APP and confirming its use, by the user scanning the QR code of the video capture device with the APP, or by the portable terminal APP when the current location of the portable terminal and the location of the video capture device satisfy a predetermined condition.
  • Step S104: Acquire the video captured by the camera under the trigger of the trigger signal.
  • The video processing method may further include: sensing, through a sensor provided on the camera device, that someone is present within the shooting range of the camera device; and, in response to the sensing, sending out the trigger signal.
  • The aforementioned sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
  • The sensor used to trigger the camera device may be an infrared sensing unit that responds to infrared radiation from the human body, a radio frequency sensing unit arranged to respond to radio frequency signals, or a radar detection unit that responds to moving objects.
  • The radio frequency sensing unit may be a radio frequency identification (RFID) card.
  • The video processing method may further include: receiving a user's operation of a switch on the camera device; and, in response to the operation, sending out the trigger signal.
  • The switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
  • The switch used to trigger the camera device may be a photoelectric switch unit that responds to the passage of a human body, a key switch unit that responds to being pressed, or a touch switch unit that responds to human touch.
  • The unit used to trigger the camera device can also be a feature trigger unit that responds to a person's gesture information, mouth-shape information, or body-shape information, or an instruction switch that responds to a network signal, for example instruction information issued by the portable terminal APP through the network. The instruction information can be generated by selecting and operating a nearby video capture device in the APP, by entering the code of the video capture device in the APP or scanning the device's QR code with the APP, or when the APP detects that the positioning coordinates of the terminal are within the area set for the video capture device.
  • Step S106: Extract the identification information corresponding to the person in the video, and save the correspondence between the identification information and the video.
  • Extracting the identification information corresponding to the person in the video may include: recognizing the attachments on the person and/or the biological characteristics of the person; and using the feature information of the attachments and/or the feature information of the biological characteristics as the identification information that identifies the person.
  • The acquired video and the identification information recognized from the video are saved in correspondence with each other, and the supporting system can perform extraction operations on the video according to this correspondence, so that visitors can easily obtain video clips containing themselves.
  • When the acquired video and the identification information recognized from the video are stored in correspondence with each other, the video file may be mapped into a database and stored in correspondence with the identification information, or the identification information may be written into the summary (metadata) of the video file.
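  • A minimal sketch of the database-mapping option using SQLite; the table schema and column names are assumptions made for illustration, not the disclosed storage format.

```python
# Sketch of mapping video files into a database in correspondence with
# identification information; schema and names are hypothetical.
import sqlite3


def save_correspondence(db_path, video_path, identity_ids):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS video_identity (
                       video_path TEXT,
                       identity_id TEXT)""")
    con.executemany("INSERT INTO video_identity VALUES (?, ?)",
                    [(video_path, i) for i in identity_ids])
    con.commit()
    con.close()


def videos_for(db_path, identity_id):
    con = sqlite3.connect(db_path)
    rows = con.execute("SELECT video_path FROM video_identity WHERE identity_id = ?",
                       (identity_id,)).fetchall()
    con.close()
    return [r[0] for r in rows]
```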
  • Through the above steps, a trigger signal is received, where the trigger signal is used to trigger the camera device to shoot; the camera device includes a driving device and a camera, the driving device is used to drive the camera of the camera device to move, and the camera is used to shoot video during the movement; the video captured by the camera under the trigger of the trigger signal is acquired; the identification information corresponding to the person in the video is extracted, and the correspondence between the identification information and the video is saved, realizing the purpose of saving in advance, in correspondence with each other, the video containing the user and the identification information of the person recognized from the video.
  • That is, the video is analyzed to obtain the identification information of the person in the video, and the video and the recognized identification information are stored in correspondence with each other, so that when the user requests his or her own video, a video matching the user's identification information can be retrieved from the video library in which the videos are stored. In this way, the purposes of saving in advance the video containing the user together with the identification information of the person recognized from the video, and of obtaining the user's video from the pre-stored video library according to the user's identification information, are achieved, which achieves the technical effect of improving the reliability of the video processing method and improves the user experience. The video processing method of the present application thus solves the technical problem of low reliability of video processing methods in the related art.
  • The video processing method further includes: acquiring the identification information of the video to be extracted, and searching the saved videos for one or more videos corresponding to the identification information of the video to be extracted; and displaying the one or more videos to the user corresponding to the identification information of the video to be extracted.
  • Acquiring the identification information of the video to be extracted may include: acquiring the feature information of the person's attachments and/or the feature information of the person's biological characteristics, and determining the feature information of the attachments and/or the feature information corresponding to the biological characteristics as the identification information of the person, where the attachments include at least one of the following: clothing, accessories, and hand-held objects; the biological characteristics include at least one of the following: facial features and posture features; and an attachment is used to uniquely identify the person within a predetermined area.
  • When the video contains multiple people, extracting the identification information corresponding to the people in the video may include: identifying the attachments and/or biological characteristics of each of the multiple people; determining the feature information of the attachments on each person and/or the feature information corresponding to the biological characteristics as the identification information of each of the multiple people; and saving the correspondence between the identification information of each of the multiple people and the video.
  • Saving the correspondence between the identification information of each of the multiple people and the video includes: determining the time node at which the identification information of each of the multiple people is recognized in the video; using the time node as the time tag of the identification information of each of the multiple people; and saving the correspondence between the video and the identification information of each of the multiple people together with the time tag added to that identification information.
  • Extracting from the video the identification information corresponding to the person in the video includes: extracting the sensed radio frequency identification information from a radio frequency signal and using the radio frequency identification information as the identification information that identifies the person, or extracting the identification information that identifies the person from a network trigger signal. Acquiring the identification information of the video to be extracted includes: extracting the sensed radio frequency identification information from the radio frequency signal and determining it as the identification information of the person, or extracting the identification information that identifies the person from the network trigger signal.
  • Extracting from the video the identification information corresponding to the person in the video can be implemented through an identification unit, which can be a radio frequency identification unit carried by the person, such as an RFID card.
  • Extracting the identification information corresponding to the person in the video can also be realized through identification information obtained via the network, for example data information, issued by the user through the portable terminal APP and obtained via the network, that contains the user identification and confirms the use of the video capture device. The data information confirming the use of the video capture device can be generated by the user entering the code of the video capture device in the APP and confirming its use, by the user scanning the QR code of the video capture device with the APP, or by the portable terminal APP when the current location of the portable terminal and the location of the video capture device satisfy a predetermined condition.
  • Fig. 2 is a schematic diagram of an application scenario of the photographing device according to Embodiment 1 of the present application, and Fig. 3 is a schematic diagram of the photographing device according to Embodiment 1. As shown in Figs. 2 and 3, this embodiment uses a convex track for reciprocating movement; the driving method is roller drive and the movement trajectory is a straight back-and-forth line.
  • The trigger method adopted is a human-body infrared sensor switch, and the track of the camera device (comprising the track and the video capture car) is a convex track.
  • When the proximity switch senses a limit iron block, the car stops moving in the current direction and starts moving in the opposite direction; when it reaches the limit iron block at the other end, it again stops in response to the proximity switch signal and reverses direction. In this way the video capture car moves back and forth in an uninterrupted straight line between the two limit iron blocks. The video capture car responds to the signal of the human-body infrared sensor switch: when human-body infrared is sensed, it starts the uninterrupted reciprocating linear motion and performs the video acquisition operation; when no human-body infrared is sensed, it stops the reciprocating linear motion.
  • The video capture vehicle (also called the sports car) is equipped with running wheels and provides a wireless connection.
  • Proximity switches are arranged at both ends of the sports car, and carbon brushes are arranged at the front and rear of the sports car; the two ends of the track are arranged with limit iron posts that can be sensed by the proximity switch, and conductive cables are arranged on both sides of the angle steel track to provide electricity for the sports car.
  • The video segment can be generated in the following way: a video segment is extracted from the continuously captured video stream, the time point at which the RFID radio frequency signal is sensed is taken as the first frame of the segment, and the time point at which the RFID radio frequency signal is sensed to disappear is taken as the end frame of the segment. The sensed RFID identification is saved in the summary of the extracted video file.
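  • A sketch of that segmentation rule, assuming the RFID reader reports timestamped sense/disappear events and the stream has a known frame rate; the event format and frame rate are assumptions.

```python
# Sketch of cutting a clip whose first frame is the moment the RFID signal
# was first sensed and whose end frame is the moment it disappeared.
def clip_bounds_from_rfid(events, fps=25):
    """events: list of (timestamp_s, sensed: bool) for one RFID card.
    Returns (start_frame, end_frame) or None if no complete interval."""
    start_t = end_t = None
    for t, sensed in events:
        if sensed and start_t is None:
            start_t = t           # first moment the radio frequency signal is sensed
        elif not sensed and start_t is not None:
            end_t = t             # moment the signal is sensed to disappear
            break
    if start_t is None or end_t is None:
        return None
    return int(start_t * fps), int(end_t * fps)


# Usage: a card sensed from 12.0 s to 48.5 s of the stream at 25 fps
print(clip_bounds_from_rfid([(12.0, True), (48.5, False)]))  # (300, 1212)
```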
  • In this embodiment, the RFID radio frequency card is used for identification, and the human-body infrared sensor switch triggers the sports car to start the linear reciprocating movement and perform the video acquisition operation, which prevents the device from performing invalid operations and allows tourists' moving video to be captured automatically at any time. Behind the booth of an exhibition hall, the video capture sports car reciprocates on the corner track, continuously shooting video in which the visitors and the exhibits appear in the same frame.
  • Fig. 4 is a first schematic diagram of an application scenario of the photographing device according to Embodiment 2 of the present application, Fig. 5 is a second schematic diagram of the application scenario, and Fig. 6 is a schematic diagram of the photographing device according to Embodiment 2. As shown in Figs. 4 to 6, this embodiment uses a concave circular track; the driving mode is a timing belt and pulley, the motion trajectory is a curved loop with person tracking, the trigger mode is facial recognition, and the movement form is a concave track.
  • A video capture sports car is started and tracks the target person from the starting point to the end point; after reaching the end point, it queues with the other video capture sports cars to wait for the next service.
  • The video capture car (also called the sports car) uses a DC miniature gear reducer motor, a synchronous pulley, and a toothed synchronous belt, and can be connected wirelessly.
  • The first frame of the video clip is generated when the video capture sports car starts to move, and the end frame is generated when the sports car reaches the bottom end of the track; the video clip does not contain the footage captured from the top of the track to the end point.
  • The file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • Facial feature recognition is used in this embodiment. By recognizing tourists whose facial features are registered, the camera device can be triggered to start the mobile vehicle and perform the mobile video acquisition operation, which allows autonomous tracking and shooting and achieves a presentation effect that conventional shooting methods cannot.
  • The first video capture sports car tracks and shoots automatically, and the second is in a ready position; on the other side of the track, four video capture sports cars wait on standby. The video capture sports cars run in a concave track.
  • The sports car is powered through two carbon brushes and is equipped with a DC miniature gear reducer motor; the synchronous pulley on the output shaft of the reducer meshes with the toothed synchronous belt fixed in the track.
  • Two conductive cables are pasted on one side wall of the track to provide electricity for the video capture sports car through the carbon brush of the sports car.
  • Fig. 7 is a first schematic diagram of an application scenario of the photographing device according to Embodiment 3 of the present application, and Fig. 8 is a second schematic diagram of the application scenario. As shown in Figs. 7 and 8, the video capture sports car is mounted on the ceiling of a T-stage performance hall and moves cyclically along the curve of a serpentine slit track.
  • This embodiment adopts a slit-shaped serpentine track; the driving mode is a roller, and the movement trajectory is a curved loop. The example can be applied to a T-shaped runway. A master control switch is used for triggering, and the track can be a slit track in the ceiling.
  • The video capture sports car moves cyclically along the track curve; after the master control switch is opened (switched off), the video capture sports car stops moving.
  • The recognition method in this embodiment is facial feature recognition.
  • A video segment is extracted from the continuously captured video stream: a fixed time before the time point at which the facial features are recognized is taken as the first frame of the segment, and a fixed time after the first-frame time point is taken as the end frame. The extracted video segment is analyzed for the degree of change in facial-expression richness, and the segments that satisfy the richness-change criterion are stored in correspondence with the facial feature recognition identifier. The file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
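  • A sketch of the fixed-window rule used in this embodiment; the window lengths are chosen arbitrarily for illustration and are not values given in the disclosure.

```python
# Sketch of the fixed-window rule: the first frame is a fixed time before
# the face was recognized, the end frame a fixed time after the first frame.
def face_clip_bounds(recognition_time_s, pre_s=2.0, clip_len_s=10.0, fps=25):
    start_t = max(0.0, recognition_time_s - pre_s)   # fixed time before recognition
    end_t = start_t + clip_len_s                     # fixed time after the first frame
    return int(start_t * fps), int(end_t * fps)


# A face recognized 30 s into the stream yields frames 700..950 at 25 fps.
print(face_clip_bounds(30.0))
```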
  • Fig. 9 is a first schematic diagram of an application scenario of the photographing device according to Embodiment 4 of the present application, Fig. 10 is a second schematic diagram of the application scenario, and Fig. 11 is a schematic diagram of the photographing device according to Embodiment 4. As shown in Figs. 9 to 11:
  • In the non-working state the video capture device is located at the lowest end of the hexagonal column; when the target person stands on the lifting platform, the video capture device is dragged by a pull rope to the top corner of the hexagonal column. The movable frame is arranged on the periphery of the hexagonal column to drive the video capture device to move.
  • This embodiment uses a rod-shaped polygonal column, a pull rope, and a rotating up-and-down movement, and can be applied in urban parks. Its trigger mode is gravity pressing down, its movement form is a vertical pole track, and its driving mode is the movable frame.
  • The pull rope between the lifting platform and the video capture movable frame drives the movable frame to rotate and move: if the lifting platform is pressed down by gravity, the video capture movable frame rotates and moves upward; if the gravity pressure is removed and the lifting platform rises, the video capture movable frame rotates and moves downward.
  • An external network instruction identifier can be used for identification, for example an APP-registered user scans the code of the mobile video acquisition device or enters the device number, and the APP registration identifier is used as the identification.
  • The video can be generated as follows: a proximity switch is provided on the pole to detect when the video capture movable frame is in its lowest position. When the video capture frame leaves the lowest position, video capture starts in response to the proximity switch signal and the first frame of the video clip is generated; when the video capture frame returns to the lowest position, the end frame of the video clip is generated in response to the proximity switch signal and video capture stops. The received network trigger identifier is stored in the summary of the generated video file.
  • The user uses the portable terminal APP to scan the code to confirm the mobile video acquisition device and then uses this device to perform the mobile video acquisition operation.
  • The shooting angle of view can be gradually raised from a horizontal view containing the user to a high-altitude view overlooking the whole scenic area, achieving a presentation effect that conventional shooting methods cannot.
  • Fig. 12 is a schematic diagram of an application scenario of a photographing device according to Embodiment 5 of the present application.
  • This embodiment can be applied to a climbing ladder: a rope track is provided on one side of the climbing ladder, and a rope puller drags the video capture device to slide along the rope track. This embodiment can be referred to as the rope type: a rope puller is used as the drive, a fixed trajectory is adopted, the trigger mode is an RFID radio frequency card, and the movement form is a rope track.
  • Both ends of the rope track and the mobile video acquisition device are equipped with RFID readers, which can read, within 10 m, passive RFID radio frequency cards that comply with the 860-960 MHz air interface parameters of the ISO/IEC 18000-6 standard.
  • The video capture unit is connected to the pull rope, and a rope puller is arranged at the top of the rope track.
  • The rope puller drags the mobile video acquisition device back and forth between the two ends of the rope track; when the RFID readers have been unable to receive a radio frequency signal for a fixed period of time, the rope puller stops dragging the mobile video capture device. In response to the RFID reader provided on the mobile video capture device sensing radio frequency identification information, the rope puller reduces the dragging speed; in response to that reader no longer receiving the radio frequency identification information, the rope puller resumes the dragging speed.
  • An RFID radio frequency card is used for identity recognition. When the radio frequency signal is sensed, video capture is performed; after the radio frequency signal has not been received for a fixed period of time, video capture ends and the video file is generated.
  • The file name of the generated video file is mapped in a database, and the video file in the database corresponds to a plurality of RFID radio frequency information records containing time tags.
  • An RFID radio frequency information record containing a time tag records the sensed RFID radio frequency information and the corresponding time points, namely the time point at which the radio frequency information began to be received and the time point at which the radio frequency information was no longer received.
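  • Such a time-tagged record could look like the following sketch; the class and field names are assumptions made for illustration.

```python
# Sketch of the time-tagged RFID record described here: for each sensed card
# it stores when the signal began to be received and when it was lost.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RfidTimeTag:
    rfid_id: str
    received_at: float               # time point when the radio frequency info was received
    lost_at: Optional[float] = None  # time point when it was no longer received


# One video file can be stored against several such records:
tags = [RfidTimeTag("E200-1234", 5.0, 21.5),
        RfidTimeTag("E200-5678", 40.0, 63.0)]
```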
  • The mobile video capture device can capture video of users within 10 m and reduces its movement speed when it approaches a user, enhancing the presentation effect of the footage shot for that user.
  • Fig. 13 is a schematic diagram of an application scenario of the photographing device according to Embodiment 6 of the present application, and Fig. 14 is a schematic diagram of the photographing device according to Embodiment 6.
  • This embodiment can be applied to trampoline sports: the video capture device tracks the target person moving up and down on the trampoline. Linear motor induction coils are arranged on both sides of the video capture device and are threaded onto two tubular magnetic shafts. This embodiment can be called the linear motor type: the driving mode is a tubular high-speed servo linear motor, the trajectory is linear reciprocation, and the movement form is a tubular track.
  • The tubular high-speed servo linear motor responds to the video image portrait analysis performed by the background server: after the person enters the target area, the magnetic-axis linear motor moves to the position corresponding to the height of the person's head and then tracks the head height reported by the portrait analysis.
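  • A rough sketch of such a head-height tracking loop, assuming the portrait analysis exposes a head-height reading and the linear motor accepts a position command; the smoothing is an illustrative choice, not the disclosed servo control.

```python
# Sketch of head-height tracking: the portrait analysis reports the head
# height, and the linear motor is commanded toward it.
def track_head_height(get_head_height_m, move_motor_to_m, alpha=0.3, steps=100):
    """Follow the head height with simple exponential smoothing."""
    target = get_head_height_m()
    for _ in range(steps):
        measured = get_head_height_m()                    # from portrait analysis
        target = (1 - alpha) * target + alpha * measured  # smooth sudden jumps
        move_motor_to_m(target)                           # command the tubular linear motor
```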
  • The trigger method used may be an external network instruction: the portable terminal APP detects that its positioning coordinates are located in a predetermined area and sends a trigger instruction to the mobile video acquisition device in that area. If the positioning coordinates of the portable terminal are within the area set for the mobile video acquisition device, the APP registration identifier can be acquired and used as the identification identifier. Video clips are extracted from the continuously captured video stream: a fixed period of time after the time point at which the video image analysis detects the person entering the target area is taken as the first frame of the clip, and a fixed period of time before the time point at which the analysis detects the person leaving the target area is taken as the end frame.
  • Slow motion effect processing is performed on the extracted video segment, and the extracted video segment is saved corresponding to the facial feature recognition identifier.
  • The file name of the generated video file is mapped in the database, and the video file in the database corresponds to the network trigger identifier.
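  • A simple sketch of one way to produce the slow-motion effect with OpenCV, rewriting the clip at a reduced frame rate so it plays back more slowly; the codec, paths, and slowdown factor are assumptions, not the disclosed processing.

```python
# Sketch of a slow-motion pass over the extracted clip using OpenCV.
import cv2


def slow_motion(src_path, dst_path, slowdown=4.0):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps / slowdown, (w, h))
    ok, frame = cap.read()
    while ok:
        out.write(frame)   # frames are written once but replayed at a lower rate
        ok, frame = cap.read()
    cap.release()
    out.release()
```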
  • The high-speed servo linear motor drives the high-speed camera to track the movement in response to the person's motion, and the slow-motion effect achieves a presentation that conventional shooting cannot.
  • Embodiments 7 to 10 and Embodiments 13 to 17 all use facial recognition as the identity recognition method, Embodiment 11 uses an RFID radio frequency card as the identity recognition method, and Embodiment 12 uses image feature recognition of a name badge as the identification method.
  • Fig. 15 is a schematic diagram of an application scenario of the photographing device according to Embodiment 7 of the present application, and Fig. 16 is a schematic diagram of the photographing device according to Embodiment 7. This embodiment can be called the magnetic levitation track type and can be applied in a science and technology museum: the target person taps a touch screen to trigger the magnetic levitation video capture sports car to move back and forth in a straight line.
  • The lower part of the magnetic levitation video capture sports car is provided with two slide plates, each with a built-in magnet, so that the video capture sports car levitates above the track.
  • The driving mode is a magnet and induction coil, the trajectory is a straight back-and-forth line, the trigger mode is a touch switch, and the movement form is a magnetic levitation track.
  • Magnets are mounted on both sides of the slide plates of the magnetic levitation video capture sports car, and magnets of the same polarity are arranged on both sides of the track, so that the sports car floats above the track. A horizontal magnetic column is arranged in the middle of the sports car, and a horizontal coil arranged in the middle of the track attracts the horizontal magnetic column; the coil drives the magnetic column, and thus the sports car, to move. The magnetic levitation video capture sports car reciprocates on the track according to a predetermined programmed trajectory.
  • a fixed number of linear reciprocating movements are initiated, and the operation of acquiring the video is performed.
  • video clips are extracted from the continuously captured video stream: the time point a fixed duration after video image analysis detects the person entering the target area is taken as the first frame of the clip, and the time point a fixed duration before video image analysis detects the person leaving the target area is taken as the end frame.
  • the extracted video clips are saved corresponding to the facial feature recognition identifiers.
  • the file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • the human body infrared sensor switch triggers the car to start its linear reciprocating movement and the video acquisition operation, which prevents the device from running when no one is present and allows moving tourists to be captured automatically at any time.
  • FIG. 17 is a schematic diagram of an application scenario of a photographing device according to Embodiment 8 of the present application
  • FIG. 18 is a schematic diagram of a photographing device according to Embodiment 8 of the present application.
  • the lifting lead screw sliding table tracks the target person, moving as it shoots.
  • the servo motor drives the lead screw to rotate and drives the video capture device to move up and down.
  • the driving mode of this embodiment is a lead screw and a sliding table, its trajectory is a tracking trajectory, its trigger mode is a portrait analysis, and its movement form is a linear track.
  • the lead screw sliding table responds to the back-end server's portrait analysis of the video image: after the person enters the target area, the sliding table moves to the position corresponding to the height of the person's head and then tracks that height as the portrait analysis updates (a control sketch follows below).
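A sketch of this head-height tracking loop, assuming the back-end server's portrait analysis returns the top of the head bounding box in image coordinates and that the slide table accepts a position setpoint in millimetres; the mapping constants and the motion-controller interface are assumptions made only for illustration.

    def head_to_table_position(head_top_px, frame_height_px,
                               table_min_mm=0.0, table_max_mm=1200.0):
        """Map the vertical pixel position of the detected head to a slide-table setpoint."""
        ratio = 1.0 - head_top_px / float(frame_height_px)   # higher head in the frame -> larger ratio
        return table_min_mm + ratio * (table_max_mm - table_min_mm)

    def track_head(portrait_analysis, slide_table, deadband_mm=20.0):
        """Move the slide table only when the detected head has moved more than the deadband."""
        last_setpoint = None
        for detection in portrait_analysis:           # e.g. dicts like {"top": px, "frame_h": px}
            setpoint = head_to_table_position(detection["top"], detection["frame_h"])
            if last_setpoint is None or abs(setpoint - last_setpoint) > deadband_mm:
                slide_table.move_to(setpoint)         # assumed motion-controller call
                last_setpoint = setpoint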
  • video clips are extracted from the continuously captured video stream: the time point a fixed duration after video image analysis detects the person entering the target area is taken as the first frame of the clip, and the time point a fixed duration before video image analysis detects the person leaving the target area is taken as the end frame.
  • the extracted video clips are saved corresponding to the facial feature recognition identifiers.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • FIG. 19 is a schematic diagram of an application scenario of a photographing device according to Embodiment 9 of the present application
  • FIG. 20 is a schematic diagram of a photographing device according to Embodiment 9 of the present application.
  • this embodiment can be referred to as the implementation of the hanging wheel mode.
  • this embodiment can be applied to a passageway in a scenic spot, where a mobile video capture device installed on the upper part of a street light pole moves up and down to shoot a target person.
  • the upper part of the street light pole is equipped with a spherical radar sensor device, a rope puller, and a hanging wheel video capture device.
  • the driving mode of this embodiment is a rope puller, the trajectory is up and down, the trigger mode is a radar sensor switch, and the movement form is sling reciprocation.
  • when the radar sensor switch detects a moving object, the video capture device moves up and down along the sling; when the radar sensor switch detects no moving objects, the video capture device stops moving.
  • a video clip is extracted from the continuously captured video stream: a fixed time before the time point at which the facial feature is recognized is taken as the first frame of the clip, and a fixed time after the first-frame time point is taken as the end frame. Clips are extracted for the different facial feature recognition subjects, and each extracted clip is saved in correspondence with its facial feature recognition identifier.
  • the file summary of the generated video file corresponds to multiple facial feature recognition identifiers containing time stamps.
  • the facial feature recognition identifier containing the time tag records the recognized facial features and the corresponding time point information.
  • the time point information includes the time point at which the facial feature is recognized and the time point at which the facial feature can no longer be recognized.
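The time-tagged identifier described above can be represented as a small record per recognised face; the field and class names below are assumptions used only to illustrate the structure.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TimeTaggedIdentifier:
        face_id: str                     # facial feature recognition identifier
        t_recognized: float              # time point at which the facial feature was recognized
        t_lost: Optional[float] = None   # time point at which the feature could no longer be recognized

    @dataclass
    class VideoFileSummary:
        file_name: str
        identifiers: List[TimeTaggedIdentifier] = field(default_factory=list)

        def add_recognition(self, face_id: str, t: float) -> None:
            self.identifiers.append(TimeTaggedIdentifier(face_id, t))

        def mark_lost(self, face_id: str, t: float) -> None:
            for rec in reversed(self.identifiers):
                if rec.face_id == face_id and rec.t_lost is None:
                    rec.t_lost = t
                    break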
  • the autonomous shooting angle of view can be gradually increased from the horizontal angle of view that includes tourists to the high-altitude angle of view that overlooks the entire scenery of the scenic area, and achieves a presentation effect that cannot be achieved by conventional shooting methods.
  • FIG. 21 is a schematic diagram of an application scenario of a photographing device according to Embodiment 10 of the present application.
  • FIG. 22 is a schematic diagram of a photographing device according to Embodiment 10 of the present application.
  • this embodiment can be called a conveyor belt type and is applied at crowded hot spots in scenic areas; at these hot spots, the video capture devices on the conveyor belt continuously move past and shoot the people in the shooting area.
  • the conveyor belt is driven by a geared motor, which drives multiple video capture devices to shoot.
  • the driving mode of this embodiment is a reduction motor and a runner, the trajectory is a belt loop, the trigger mode is a master control switch, and the movement mode is a conveyor belt loop.
  • a video clip is extracted from the continuously captured video stream: a fixed time before the time point at which the facial feature is recognized is taken as the first frame of the clip, and a fixed time after the first-frame time point is taken as the end frame. Clips are extracted for the different facial feature recognition subjects, and each extracted clip is saved in correspondence with its facial feature recognition identifier.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • FIG. 23 is a schematic diagram of an application scenario of a photographing device according to Embodiment 11 of the present application
  • FIG. 24 is a schematic diagram of a photographing device according to Embodiment 11 of the present application.
  • this embodiment may be called a scissor lift type
  • This embodiment can be applied to temporary scenic spots.
  • the video capture device is temporarily placed at the temporary scenic spot to shoot the target person from the bottom up.
  • the hydraulic station drives the oil cylinder to drive the lifting device of the scissor fork structure to reciprocate up and down.
  • the driving mode of this embodiment is oil pressure
  • the trajectory is a programmed trajectory
  • the trigger mode is a master control switch
  • the movement form is a retractable lifting structure, such as a scissor fork lifting.
  • the time point a fixed duration after the RFID reader stops receiving the radio frequency identification information is taken as the end frame of the video clip.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the RFID radio frequency information identification.
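A sketch of how the end frame can be derived from the RFID reads, assuming the reader delivers a stream of (timestamp, tag_id) events for the card; the timeout value stands in for the fixed duration and is an assumption.

    def clip_bounds_from_rfid(read_events, timeout_s=5.0):
        """read_events: list of (timestamp_s, tag_id) reads for one RFID card.
        Returns (first_frame_time, end_frame_time); the clip ends a fixed time
        after the reader stops receiving the tag."""
        if not read_events:
            return None
        times = sorted(t for t, _ in read_events)
        return times[0], times[-1] + timeout_s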
  • the mobile video capture device can capture video of users within 10 m, and its autonomous shooting angle can move from a horizontal view of the tourist to a high-altitude view of the entire scenic spot, enhancing the presentation effect of user-oriented shooting.
  • the mobile video acquisition device can be dragged to a temporary location at any time to perform the task of video capture according to the temporary needs of the scenic spot.
  • FIG. 25 is a schematic diagram of an application scenario of a photographing device according to the twelfth embodiment of the present application
  • FIG. 26 is a schematic diagram of a photographing device according to the twelfth embodiment of the present application.
  • this embodiment may be called a multi-section rod
  • this embodiment can be applied to the entrance of a scenic spot.
  • the target person passes through the photoelectric switch sensing area, and in response to the photoelectric switch sensing signal, the cylinder performs an upward movement to drive the video capture device to shoot the target person.
  • a travel switch is arranged at the top of the vertical pole, and the travel switch signal triggers the cylinder to perform a downward movement.
  • This embodiment is a power air system composed of an air compressor, an oil-water separator, an electromagnetic reversing valve, and an air cylinder, which drives the video capture device to move up and down.
  • the driving mode is an air cylinder
  • the trajectory is reciprocating
  • the trigger mode is a photoelectric switch
  • the movement mode is a multi-section air cylinder.
  • the mobile video acquisition device is set on the top of the multi-section cylinder.
  • the pneumatic pipeline electromagnetic reversing valve controls the expansion and contraction of the multi-section cylinder.
  • the electromagnetic reversing valve responds to the off signal of the photoelectric switch to control the extension of the cylinder, and responds to the closing signal of the travel switch to control the contraction of the cylinder (see the sketch below).
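A sketch of this valve control logic, assuming simple boolean inputs for the through-beam photoelectric switch and the travel switch at the top of the pole, and a two-coil solenoid reversing valve driven through a digital-output interface; all device and method names here are assumptions.

    import time

    def cylinder_controller(io, poll_s=0.02):
        """Extend the multi-section cylinder when the photoelectric beam is interrupted,
        retract it when the travel switch at the top of the vertical pole closes."""
        extended = False
        while True:
            if not extended and io.photoelectric_beam_broken():   # person in the sensing area
                io.valve_extend(True)                             # energize the extend coil
                io.valve_retract(False)
                extended = True
            elif extended and io.travel_switch_closed():          # cylinder has reached the top
                io.valve_extend(False)
                io.valve_retract(True)                            # energize the retract coil
                extended = False
            time.sleep(poll_s)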
  • the mobile video acquisition device takes the time at which the through-beam photoelectric switch is switched off as the first frame of the video clip, and a time point after the travel switch closes as the end frame of the captured clip.
  • the generated video clips are saved in correspondence with the image feature identification of the badge.
  • the image feature identification of the badge is recorded in the file summary of the generated video file.
  • the multi-section cylinder is triggered to extend and the mobile video acquisition operation starts; the autonomous shooting angle can rise gradually from a horizontal view containing the tourists to a high-altitude view overlooking the whole scenic area, realizing a presentation effect that conventional shooting methods cannot achieve.
  • FIG. 27 is a schematic diagram of an application scenario of a photographing device according to Embodiment 13 of the present application.
  • FIG. 28 is a schematic diagram of a photographing device according to Embodiment 13 of the present application.
  • this embodiment may be called a two-way slide type.
  • This embodiment can be applied to an art gallery.
  • the mobile video acquisition device is driven by a two-way slide to track and shoot.
  • a power supply cable and a rack are arranged on the first rail; the cable contacts the power-taking carbon brush, and the rack meshes with the output shaft gear of the first servo motor.
  • the second track is arranged on the first sliding table, and the second track is arranged with a rack gear which meshes with the gear of the output shaft of the second servo motor.
  • a connection line is provided between the first servo motor and the second servo motor, and is wrapped in the protective drag chain.
  • the video capture device is arranged on the second sliding table.
  • the driving mode of this embodiment is a synchronous belt and a synchronous wheel, the trajectory is a tracking-programming trajectory, the trigger mode is a portrait analysis, and the movement form is a sliding rail.
  • the mobile video acquisition device is configured to program multiple sets of predetermined trajectories corresponding to the multi-angle shooting of the artwork.
  • the mobile video acquisition device responds to the back-end server's portrait analysis of the video image: it tracks and records the person's activity for a fixed period of time, then executes the trajectory program predetermined for the artwork the person is looking at or the artwork nearest the person, and after that program finishes it again tracks the person for a fixed period of time in response to the back-end server's portrait analysis (a behaviour sketch follows below).
  • video clips are extracted from the continuously captured video stream: the time point a fixed duration after video image analysis detects the person entering the target area is taken as the first frame of the clip, and the time point a fixed duration before video image analysis detects the person leaving the target area is taken as the end frame. Clips are extracted for the different facial feature recognition subjects, and each extracted clip is saved in correspondence with its facial feature recognition identifier.
  • the file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
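A sketch of the behaviour switching in this embodiment, assuming the back-end portrait analysis reports the visitor's position and which artwork (if any) the visitor is near, and that a preset trajectory program is stored per artwork; every interface and key name below is an assumption.

    import time

    def two_way_slide_loop(portrait_analysis, slide, artwork_programs, track_s=3.0):
        """Alternate between tracking the visitor and running the trajectory program
        preset for the artwork the visitor is looking at or standing near."""
        while True:
            person = portrait_analysis.latest()            # assumed: None or a detection dict
            if person is None:
                time.sleep(0.1)
                continue
            t_end = time.time() + track_s                  # track the person for a fixed period
            while time.time() < t_end:
                slide.move_towards(person["x"], person["y"])
                person = portrait_analysis.latest() or person
            artwork = person.get("nearest_artwork")
            if artwork in artwork_programs:
                slide.run_program(artwork_programs[artwork])   # preset multi-angle trajectory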
  • FIG. 29 is a schematic diagram of an application scenario of a photographing device according to Embodiment 14 of the present application.
  • FIG. 30 is a schematic diagram of a photographing device according to Embodiment 14 of the present application.
  • the video capture device is set at the bottom end of the water storage bucket. If the water level in the water storage bucket reaches a predetermined water level, the video capture device will move upward as the center of gravity of the water storage bucket shifts; After the water in the bucket is emptied, the video capture device moves down with the return of the water bucket.
  • the photographing device in this embodiment may include: a water storage bucket, a rotating shaft, a water valve, a video capture device, and a counterweight.
  • the driving mode is water energy
  • the trajectory is water pressure back and forth
  • the trigger mode is water flow
  • the movement mode is single-shaft rotation: when there is water flow, the water storage bucket swings repeatedly about the rotating shaft, and the mobile video acquisition device at one end of the bucket moves up and down in an arc centred on the shaft.
  • video clips are extracted from the continuously captured video stream: the time point a fixed duration after video image analysis detects the person entering the target area is taken as the first frame of the clip, and the time point a fixed duration before video image analysis detects the person leaving the target area is taken as the end frame.
  • the extracted video clips are saved corresponding to the facial feature recognition identifiers.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • the autonomous shooting angle of view can be gradually increased from the horizontal angle of view containing tourists to the high-altitude angle of view of the overview of the panoramic view area, realizing a presentation effect that cannot be achieved by conventional shooting methods.
  • FIG. 31 is a schematic diagram of an application scenario of a photographing device according to Embodiment 15 of the present application.
  • FIG. 32 is a schematic diagram of a photographing device according to Embodiment 15 of the present application. As shown in FIG. 31 and FIG. 32, this embodiment may be called a turntable type. This embodiment can be applied to a water park, where multiple video capture devices are arranged on a rotating water wheel, and multiple video capture devices perform continuous shooting at the same time.
  • the photographing device in this embodiment may include: a waterwheel, a water valve, a video capture device, and a counterweight.
  • the driving mode of this embodiment is water energy
  • the trajectory is cyclic
  • the trigger mode is water flow
  • the movement mode is single-rotation shaft rotation.
  • the water storage buckets drive the wheel to rotate, and the plurality of mobile video acquisition devices arranged on the wheel move in a circle about the rotating shaft.
  • video clips are extracted from the continuously captured video stream: the time point a fixed duration after video image analysis detects the person entering the target area is taken as the first frame of the clip, and the time point a fixed duration before video image analysis detects the person leaving the target area is taken as the end frame.
  • the autonomous shooting angle of view can be gradually increased from the horizontal angle of view that includes tourists to the high-altitude angle of view that overlooks the entire scenery of the scenic area, and achieves a presentation effect that cannot be achieved by conventional shooting methods.
  • FIG. 33 is a schematic diagram of an application scenario of a photographing device according to the sixteenth embodiment of the present application
  • FIG. 34 is a schematic diagram of a photographing device according to the sixteenth embodiment of the present application.
  • this embodiment may be referred to as a multi-axis rocker arm type
  • this embodiment can be used at a spiral staircase, where the video capture device tracks the movement of the target person walking on the staircase.
  • the first stepping motor drives the first rocker arm to rotate
  • the second stepping motor drives the second rocker arm to rotate.
  • the second rocker arm rotating shaft is arranged on the first rocker arm.
  • the driving mode is a stepping motor
  • the trajectory is a tracking trajectory
  • the trigger mode is a portrait analysis
  • the movement mode is multi-joint rotation.
  • the turntable and the rocker arms respond to the back-end server's portrait analysis of the video image; after the person enters the target area, they make the video acquisition device track the movement of the person's head position.
  • in response to the back-end server's portrait analysis of the video image, the mobile video acquisition device takes a time point a fixed duration before the portrait appears in the predetermined area as the first frame of the video clip and the time point at which the portrait leaves the predetermined area as the end frame, and saves the generated clip in correspondence with the facial feature recognition identifier.
  • the file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • the mobile video acquisition device of this embodiment tracks the target person for shooting, and achieves a presentation effect that cannot be achieved by conventional shooting means.
  • FIG. 35 is a schematic diagram of the first application scenario of a photographing device according to the seventeenth embodiment of the application
  • FIG. 36 is a schematic diagram of the second application scenario of the photographing device according to the seventeenth embodiment of the present application
  • FIG. 37 is a schematic diagram of the photographing device according to the seventeenth embodiment of the present application
  • this embodiment can be called a sliding table rocker combination.
  • This embodiment can be applied to the welcome lane of a scenic spot.
  • a welcoming device that waves a flagpole is installed along the welcome lane of the scenic spot, and the welcoming device is also equipped with a video capture device.
  • the first cylinder controls the telescopic movement of the scissor and fork mechanism on the sliding table
  • the second cylinder controls the rocker arm to swing up and down.
  • the photographing device in this embodiment may include: an air pump, an oil-water separator, a first cylinder, a first electromagnetic directional valve, a second electromagnetic directional valve, a second cylinder, a third electromagnetic directional valve, and a fourth electromagnetic directional valve .
  • the driving mode of this embodiment is an air cylinder
  • the trajectory is a random and disordered trajectory
  • the trigger mode is a master control switch
  • the movement form is a linear reciprocating and single-joint rotation of the sliding table.
  • the sliding table is driven by the extension and contraction of the first cylinder
  • the rocker arm is driven by the extension and contraction of the second cylinder.
  • the first solenoid reversing valve controls the extension of the first cylinder
  • the second solenoid reversing valve controls the contraction of the first cylinder
  • the first and second solenoid reversing valves execute their commands alternately, each time holding for a duration selected at random within 5-20 seconds, which controls the operating time of the first cylinder (see the timing sketch after this group of items).
  • for example, 8 seconds is randomly selected as the first time sequence: the first electromagnetic reversing valve ventilates the cylinder and the second exhausts outward, so the sliding table moves to one end; after 8 seconds the second time sequence is entered, with 17 seconds randomly selected as its duration.
  • the commands of the first and second solenoid reversing valves are then interchanged: the first valve exhausts outward and the second valve ventilates the first cylinder, so the sliding table moves in the opposite direction; after 17 seconds the third sequence is performed.
  • the third solenoid reversing valve controls the extension of the second cylinder
  • the fourth solenoid reversing valve controls the contraction of the second cylinder
  • the third and fourth solenoid reversing valves execute their commands alternately, each time holding for a duration selected at random within 3-12 seconds, which controls the operating time of the second cylinder. For example, 5 seconds is randomly selected as the first time sequence: the third electromagnetic reversing valve ventilates the cylinder and the fourth exhausts outward, so the rocker arm moves upward; after 5 seconds the second time sequence is entered, with 9 seconds randomly selected as its duration.
  • the commands of the third and fourth solenoid reversing valves are then interchanged: the third valve exhausts outward and the fourth valve ventilates the second cylinder, so the rocker arm moves downward; after 9 seconds the third sequence is performed.
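A sketch of the random timing sequence described above for the two cylinders, assuming each pair of solenoid reversing valves is driven through simple on/off callables; the interval bounds follow the 5-20 s and 3-12 s ranges given above, and the threading layout is an assumption.

    import random
    import threading
    import time

    def random_reciprocation(extend, retract, lo_s, hi_s, stop_event):
        """Alternate the two reversing-valve commands, holding each state for a random time."""
        forward = True
        while not stop_event.is_set():
            hold = random.uniform(lo_s, hi_s)      # e.g. 5-20 s for the slide, 3-12 s for the rocker
            if forward:
                extend(True); retract(False)       # ventilate one port, exhaust the other
            else:
                extend(False); retract(True)       # reverse the two commands
            forward = not forward
            time.sleep(hold)

    # one thread per cylinder so the sliding table and the rocker arm move independently:
    # stop = threading.Event()
    # threading.Thread(target=random_reciprocation, args=(valve1, valve2, 5, 20, stop)).start()
    # threading.Thread(target=random_reciprocation, args=(valve3, valve4, 3, 12, stop)).start()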
  • the video capture device follows the sliding table and rocker arm to move randomly; after the main control switch is turned off, the video capture device stops moving.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • the autonomous shooting angle of view can be gradually increased from the horizontal angle of view that includes tourists to the high-altitude angle of view that overlooks the entire scenery of the scenic area, and achieves a presentation effect that cannot be achieved by conventional shooting methods.
  • FIG. 38 is a schematic diagram of the video processing device according to an embodiment of the present application.
  • the video processing device includes: a first receiving unit 3801, a first acquiring unit 3803, and an extracting unit 3805.
  • the video processing device will be described in detail below.
  • the first receiving unit 3801 is used to receive a trigger signal, where the trigger signal is used to trigger the camera device to shoot; the camera device includes a driving device and a camera, the driving device is used to drive the camera in the camera device to move, and the camera is used to shoot video during the movement.
  • the first acquisition unit 3803 is configured to acquire a video captured by the camera under the trigger of a trigger signal.
  • the extraction unit 3805 is configured to extract the identity information corresponding to the person in the video, and save the corresponding relationship between the identity information and the video.
  • the first receiving unit 3801, the first obtaining unit 3803, and the extracting unit 3805 correspond to steps S102 to S106 in the first embodiment; the units implement the same examples and application scenarios as the corresponding steps, but are not limited to the content disclosed in the first embodiment. It should be noted that, as part of the device, the above-mentioned units can run in a computer system, for example as a set of computer-executable instructions.
  • the first receiving unit can be used to receive the trigger signal, where the trigger signal is used to trigger the camera device to shoot; the camera device includes a driving device and a camera, the driving device is used to drive the camera in the camera device to move, and the camera is used for video shooting during the movement; the first acquisition unit is then used to acquire the video captured by the camera under the trigger of the trigger signal; the extraction unit is then used to extract the identification information corresponding to the person in the video and to save the correspondence between the identification information and the video (a structural sketch follows below).
  • with the video processing device of this application, the video containing the user and the identification information of the person recognized from the video are saved in correspondence in advance, and the user's video is retrieved from the pre-stored video library according to the user's identification information; this achieves the technical effect of improving the reliability of the video processing method, improves the user experience, and thus solves the technical problem of low reliability of video processing methods in the related art.
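A structural sketch of the three units listed above, with hypothetical class and method names; the actual device is not limited to this decomposition, and the camera, recognizer and store interfaces are assumptions.

    class FirstReceivingUnit:
        def receive_trigger(self, signal):
            """Accept a trigger signal that starts the moving camera device."""
            return signal

    class FirstAcquisitionUnit:
        def acquire_video(self, camera, signal):
            """Fetch the video the camera captured while the driving device moved it."""
            return camera.capture_under_trigger(signal)    # assumed camera interface

    class ExtractionUnit:
        def extract_and_save(self, video, recognizer, store):
            """Extract identification information for the people in the video and save the
            correspondence between that information and the video."""
            for identity in recognizer.identify_people(video):   # assumed recognizer interface
                store.save(identifier=identity, file_name=video.file_name)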
  • the video processing device further includes: a sensing unit, configured to sense, through a sensor provided on the camera device, that someone is present in the shooting range of the camera device before the trigger signal is received; and a first response unit, used to send out the trigger signal in response to the sensing of the sensor.
  • the sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
  • the video processing device further includes: a second receiving unit, configured to receive a user's operation of a switch in the camera device before receiving the trigger signal; and a second response unit, configured to respond to Operate to issue a trigger signal.
  • the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
  • the video processing device further includes: a third receiving unit, configured to receive a user's operation on the software interface before receiving the trigger signal; and a third response unit, configured to respond to the operation, Send a trigger signal to the camera via the network.
  • the device further includes: a third obtaining unit, configured to obtain the identification information of the camera device by scanning a graphical code set on the camera device before receiving the user's operation on the software interface;
  • the display unit is used to display the operations that can be performed on the camera device on the software interface according to the identification information.
  • the video processing apparatus further includes: a fourth obtaining unit, configured to obtain the geographic location information of the person's handheld device, in the case where the software interface is displayed on that handheld device, before receiving the user's operation on the software interface; and a display unit, used to display on the software interface, according to the geographic location information, the camera devices within a predetermined range of the handheld device that the person can control, and the operations that can be performed on those camera devices.
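A sketch of the geographic filtering step, assuming the handheld device reports a latitude/longitude pair and each camera device has a registered position; the 200 m radius is an arbitrary stand-in for the predetermined range.

    from math import radians, sin, cos, asin, sqrt

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance between two WGS-84 points, in metres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371000 * asin(sqrt(a))

    def controllable_cameras(handheld_pos, cameras, radius_m=200.0):
        """Return the camera devices within the predetermined range of the handheld device."""
        lat, lon = handheld_pos
        return [c for c in cameras if distance_m(lat, lon, c["lat"], c["lon"]) <= radius_m]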
  • the video processing device further includes: a second acquiring unit, configured, after the identification information corresponding to the person in the video has been extracted and the correspondence between the identification information and the video has been saved, to acquire the identification information of the video to be extracted and to search the saved videos for one or more videos corresponding to that identification information; and a display unit, used to display the one or more videos to the user corresponding to the identification information of the video to be extracted.
  • the extraction unit is used to identify attachments on the person and/or biological characteristics of the person, and to use the characteristic information of the attachments and/or of the biological characteristics as the identification information for identifying the person; the second acquisition unit is used to acquire the characteristic information of the person's attachments and/or the identification information of the person's biological characteristics, and to determine the characteristic information of the attachments and/or the characteristic information corresponding to the biological characteristics as the person's identification information; the attachments include at least one of the following: clothing, accessories, and hand-held objects; the biological characteristics include at least one of the following: facial features and posture features; the attachments are used to uniquely identify the person within a predetermined area.
  • the extraction unit is also used to extract sensed radio frequency identification information from a radio frequency signal and use it as the identification information for identifying the person, or to extract the identification information that identifies the person from a network trigger signal; the second acquisition unit is also used to extract the sensed radio frequency identification information from the radio frequency signal and determine it as the person's identification information, or to extract the person's identification information from the network trigger signal.
  • when the number of people in the video is more than one, the extraction unit includes: a first recognition module, configured to recognize the attachments and/or biological characteristics of each person among the multiple people; and a first storage module, used to determine the characteristic information of the attachments and/or the characteristic information corresponding to the biological characteristics of each person as that person's identification information, and to save the correspondence between each person's identification information and the video.
  • the first saving module includes: a first determining sub-module, configured to determine the time node at which the identification information of each of the multiple people is recognized in the video; a second determining sub-module, used to take that time node as the time label of each person's identification information; and a saving sub-module, used to save the correspondence between each person's identification information, the time label added to it, and the video.
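A sketch of the multi-person case, assuming the recognizer yields (time_node, person_identifier) pairs for the video; the first time node at which each identifier is recognized is used as that person's time label, and the store API is an assumption.

    from collections import OrderedDict

    def time_labels(recognitions):
        """recognitions: iterable of (time_node_s, person_identifier) pairs.
        Returns {person_identifier: first time node at which the identifier was recognized}."""
        labels = OrderedDict()
        for t, person_id in recognitions:
            labels.setdefault(person_id, t)
        return labels

    def save_multi_person(store, video_file, recognitions):
        """Save each person's identification information, its time label and the video together."""
        for person_id, t in time_labels(recognitions).items():
            store.save(identifier=person_id, time_label=t, file_name=video_file)   # assumed store API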
  • the movement trajectory of the camera includes at least one of the following: a reciprocating movement trajectory between a predetermined starting point and a predetermined end point, a cyclic movement trajectory of a predetermined path, a movement trajectory designed based on a predetermined programming program, and following the target The tracking movement trajectory of the object.
  • the movement of the camera is at least one of the following: orbital movement and rotational movement.
  • the driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
  • a storage medium includes a stored program, wherein the program executes any one of the above-mentioned video processing methods.
  • a processor which is configured to run a program, wherein the video processing method of any one of the above is executed when the program is running.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units may be a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application essentially or the part that contributes to the existing technology or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium , Including several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage media include: USB flash disks, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, optical disks, and other media that can store program code.

Abstract

Disclosed in the present application are a video processing method and device. The method comprises: receiving a trigger signal, wherein the trigger signal is used for triggering a camera device to perform photographing, the camera device comprising a driving device and a camera, the driving device being used for driving the camera in the camera device to move, and the camera being used for performing video capture during the movement; obtaining a video captured by the camera under the trigger of the trigger signal; extracting the identification information corresponding to a person in the video, and saving a corresponding relationship between the identification information and the video; obtaining the identification information of a video to be extracted, and searching the saved video for one or more videos corresponding to the identification information of the video to be extracted; and displaying the one or more videos to the user corresponding to the identification information of the video to be extracted. The present application solves the technical problem in the related art of low reliability of video processing methods.

Description

视频处理方法及装置Video processing method and device 技术领域Technical field
本申请涉及视频处理技术领域,具体而言,涉及一种视频处理方法及装置。This application relates to the technical field of video processing, and in particular to a video processing method and device.
背景技术Background technique
目前,获取视频的方式主要是使用相机、手机、摄像机以及无人机等手段自主进行获取,缺乏一种在提供高质量视频服务的方法和手段。At present, the way to obtain video is mainly to use cameras, mobile phones, video cameras, and drones to obtain them independently. There is a lack of methods and means to provide high-quality video services.
高质量的视频除了需要有成像效果好的视频采集设备,还需要有辅助拍摄手段。比如使用传统的固定式摄像头获取的视频,背景死板单一,缺乏动态效果,无法实现理想的效果呈现。比如拍摄电影时,摄影师通常需要架设轨道车和拍摄悬臂等设施,从而实现理想的效果呈现。High-quality video requires not only video capture equipment with good imaging effects, but also auxiliary shooting means. For example, the video captured using a traditional fixed camera has a rigid background and lacks dynamic effects, which makes it impossible to achieve an ideal presentation. For example, when shooting a movie, photographers usually need to set up rail cars and shooting cantilevers to achieve the ideal effect.
如果把专业电影拍摄的轨道车和拍摄悬臂等设施移植到景区内为游客提供采集视频的服务,一方面需要配备摄影人员提供人工服务,另一方面生成大量视频片段通过人工比对拣选,再将所生成的视频交付给被拍摄的用户。人工比对拣选视频效率低,如果生成视频数量超过人工拣选工作量,则影响服务效果。If the railcars and shooting cantilevers for professional movie shooting are transplanted into the scenic spot to provide tourists with video capture services, on the one hand, photographers need to be equipped to provide manual services, on the other hand, a large number of video clips will be generated through manual comparison and selection, and then The generated video is delivered to the user who was filmed. Manual comparison of video picking is inefficient. If the number of generated videos exceeds the workload of manual picking, the service effect will be affected.
针对上述相关技术中的视频处理方式可靠性较低的问题,目前尚未提出有效的解决方案。In view of the low reliability of the video processing methods in the above-mentioned related technologies, no effective solution has been proposed at present.
发明内容Summary of the invention
本申请实施例提供了一种视频处理方法及装置,以至少解决相关技术中的视频处理方式可靠性较低的技术问题。The embodiments of the present application provide a video processing method and device to at least solve the technical problem of low reliability of video processing methods in the related art.
根据本申请实施例的一个方面,提供了一种视频处理方法,包括:接收触发信号,其中,所述触发信号用于触发摄像装置进行拍摄,所述摄像装置中包括驱动装置和摄像头,所述驱动装置用于驱动所述摄像装置中的摄像头移动;所述摄像头用于在移动的过程中进行视频拍摄;获取所述摄像头在所述触发信号的触发下拍摄的视频;从所述视频中提取所述视频中的人所对应的身份标识信息,并将所述身份标识信息与所述视频的对应关系进行保存。According to one aspect of the embodiments of the present application, there is provided a video processing method, including: receiving a trigger signal, wherein the trigger signal is used to trigger a camera device to shoot, and the camera device includes a driving device and a camera. The driving device is used for driving the camera in the camera device to move; the camera is used for video shooting during the movement; acquiring the video captured by the camera under the trigger of the trigger signal; extracting from the video The identity information corresponding to the person in the video is stored, and the corresponding relationship between the identity information and the video is stored.
可选地,在接收到所述触发信号之前,所述视频处理方法还包括:通过所述摄像 装置上设置的传感器感应到有人出现在所述摄像装置的拍摄范围内;响应于所述传感器的感应,发出所述触发信号。Optionally, before receiving the trigger signal, the video processing method further includes: sensing that a person is present in the shooting range of the camera device through a sensor provided on the camera device; Induction, and send out the trigger signal.
可选地,所述传感器为以下至少之一:红外感应单元,射频感应单元,雷达探测单元。Optionally, the sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
可选地,在接收到所述触发信号之前,所述视频处理方法还包括:接收用户对所述摄像装置中的开关的操作;响应于所述操作,发出所述触发信号。Optionally, before receiving the trigger signal, the video processing method further includes: receiving a user's operation of a switch in the camera device; and in response to the operation, sending the trigger signal.
可选地,所述开关为以下至少之一:按键开关单元,触摸开关单元,光电开关单元。Optionally, the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
可选地,在接收到所述触发信号之前,所述视频处理方法还包括:接收用户在软件界面上的操作;响应于所述操作,通过网络向所述摄像装置发出所述触发信号。Optionally, before receiving the trigger signal, the video processing method further includes: receiving a user's operation on a software interface; in response to the operation, sending the trigger signal to the camera device via a network.
可选地,在接收用户在软件界面上的操作之前,所述视频处理方法还包括:通过扫描设置在所述摄像装置上的图形化码获取所述摄像装置的标识信息;根据所述标识信息在所述软件界面上显示能够对所述摄像装置所进行的操作。Optionally, before receiving the user's operation on the software interface, the video processing method further includes: acquiring the identification information of the camera device by scanning a graphical code set on the camera device; and according to the identification information The operations that can be performed on the camera device are displayed on the software interface.
可选地,在从所述视频中提取所述视频中的人所对应的身份标识信息,并将所述身份标识信息与所述视频的对应关系进行保存之后,所述方法还包括:获取待提取视频的身份识别信息,并在已经保存的视频中查找与所述待提取视频的身份识别信息所对应的一个或多个视频;将所述一个或多个视频显示给所述待提取视频的身份识别信息所对应的用户。Optionally, after extracting the identification information corresponding to the person in the video from the video, and saving the corresponding relationship between the identification information and the video, the method further includes: acquiring Extract the identification information of the video, and search the saved videos for one or more videos corresponding to the identification information of the video to be extracted; display the one or more videos to the video to be extracted The user corresponding to the identification information.
可选地,在接收用户在软件界面上的操作之前,所述视频处理方法还包括:在所述软件界面显示所述人的手持设备的情况下,获取所述手持设备的地理位置信息;根据所述地理位置信息在所述软件界面上显示距离所述手持设备预定范围内的所述人能够控制的所述摄像装置,以及能过对所述摄像装置所进行的操作。Optionally, before receiving the user's operation on the software interface, the video processing method further includes: acquiring the geographic location information of the handheld device when the software interface displays the human handheld device; The geographic location information displays on the software interface the camera devices that can be controlled by the person within a predetermined range from the handheld device, and the operations that can be performed on the camera device.
可选地,从所述视频中提取所述视频中的人所对应的身份标识信息包括:从射频信号中提取感应到的射频标识信息;将所述射频标识信息作为用于标识该人的身份标识信息;从网络触发信号中提取得到标识该人的身份标识信息;获取待提取视频的身份识别信息包括:获取所述射频信号中提取感应到的射频标识信息,将所述射频标识信息确定为所述人的身份标识信息;从所述网络触发信号中提取得到标识该人的身份标识信息。Optionally, extracting the identification information corresponding to the person in the video from the video includes: extracting the sensed radio frequency identification information from the radio frequency signal; using the radio frequency identification information as the identity for identifying the person Identification information; extracting from the network trigger signal the identification information that identifies the person; acquiring the identification information of the video to be extracted includes: acquiring the radio frequency signal and extracting the sensed radio frequency identification information, and determining the radio frequency identification information as The identification information of the person; the identification information that identifies the person is extracted from the network trigger signal.
可选地,从所述视频中提取所述视频中的人所对应的身份标识信息包括:从所述人中识别出所述人身上的附着物和/或所述人的生物特征;将所述附着物的特征信息和 /或所述生物特征的特征信息作为用于标识该人的身份标识信息;获取待提取视频的身份识别信息包括:获取所述人的附着物的特征信息和/或所述人的生物特征的身份标识信息,将所述附着物的特征信息和/或所述生物特征对应的特征信息确定为所述人的身份标识信息;其中,所述附着物包括以下至少之一:服装、饰品、手持物品;所述生物特征包括以下至少之一:面部特征,体态特征;所述附着物用于在预定区域唯一标识所述人。Optionally, extracting the identification information corresponding to the person in the video from the video includes: recognizing attachments on the person and/or biological characteristics of the person from the person; The feature information of the attachment and/or the feature information of the biological feature is used as the identification information for identifying the person; obtaining the identification information of the video to be extracted includes: obtaining the feature information and/or of the attachment of the person The identification information of the biological characteristics of the person, and the characteristic information of the attachment and/or the characteristic information corresponding to the biological characteristics are determined as the identification information of the person; wherein the attachment includes at least one of the following One: clothing, accessories, hand-held items; the biological characteristics include at least one of the following: facial characteristics, posture characteristics; the attachment is used to uniquely identify the person in a predetermined area.
可选地,在所述视频中的人的数量为多个的情况下,从所述视频中提取所述视频中的人所对应的身份标识信息包括:从所述多个人中识别出每一个人身上的附着物和/或生物特征;将所述每一个人身上的附着物的特征信息和/或生物特征对应的特征信息确定为所述多个人中每一个人的身份标识信息,并将所述多个人中每一个人的身份识别信息与所述视频的对应关系进行保存。Optionally, in a case where there are multiple persons in the video, extracting the identity information corresponding to the persons in the video from the video includes: identifying each of the multiple persons The attachments and/or biological characteristics of a person; the characteristic information of the attachments on each person and/or the characteristic information corresponding to the biological characteristics are determined as the identification information of each of the multiple persons, and The corresponding relationship between the identification information of each of the multiple persons and the video is stored.
可选地,将所述多个人中每一个人的身份识别信息与所述视频的对应关系进行保存包括:确定所述多个人中每一个人的身份识别信息在所述视频中被识别出的时间节点;将所述时间节点作为所述多个人中每一个人的身份识别信息的时间标签;将所述多个人中每一个人的身份识别信息以及为所述多个人中每一个人的身份识别信息添加的时间标签与所述视频的对应关系进行保存。Optionally, storing the corresponding relationship between the identification information of each of the plurality of people and the video includes: determining that the identification information of each of the plurality of people is identified in the video Time node; use the time node as the time label of the identification information of each of the plurality of people; use the identification information of each of the plurality of people as the identity of each of the plurality of people The corresponding relationship between the time tag added by the identification information and the video is saved.
可选地,所述摄像头的移动轨迹包括以下至少之一:预定起点和预定终点之间的往返移动轨迹,预定路径的循环移动轨迹,基于预定编程程序设计的移动轨迹,跟随目标对象的跟踪移动轨迹。Optionally, the movement trajectory of the camera includes at least one of the following: a reciprocating movement trajectory between a predetermined starting point and a predetermined end point, a cyclic movement trajectory of a predetermined path, a movement trajectory designed based on a predetermined programming program, following the tracking movement of the target object Trajectory.
可选地,所述摄像头的移动方式为以下至少之一:轨道式移动,旋转式移动。Optionally, the movement of the camera is at least one of the following: orbital movement and rotation movement.
可选地,所述驱动装置的驱动方式为以下至少之一:机械驱动,电磁驱动,压力驱动。Optionally, the driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
根据本申请实施例的另外一个方面,还提供了一种视频处理装置,包括:第一接收单元,用于接收触发信号,其中,所述触发信号用于触发摄像装置进行拍摄,所述摄像装置中包括驱动装置和摄像头,所述驱动装置用于驱动所述摄像装置中的摄像头移动;所述摄像头用于在移动的过程中进行视频拍摄;第一获取单元,用于获取所述摄像头在所述触发信号的触发下拍摄的视频;提取单元,用于从所述视频中提取所述视频中的人所对应的身份标识信息,并将所述身份标识信息与所述视频的对应关系进行保存。According to another aspect of the embodiments of the present application, there is also provided a video processing device, including: a first receiving unit, configured to receive a trigger signal, wherein the trigger signal is used to trigger a camera device to shoot, and the camera device It includes a driving device and a camera, the driving device is used to drive the camera in the camera device to move; the camera is used for video shooting during the movement; the first acquisition unit is used to acquire the camera in the The video taken under the trigger of the trigger signal; the extraction unit is used to extract the identity information corresponding to the person in the video from the video, and save the corresponding relationship between the identity information and the video .
可选地,所述视频处理装置还包括:感应单元,用于在接收到所述触发信号之前,通过所述摄像装置上设置的传感器感应到有人出现在所述摄像装置的拍摄范围内;第 一响应单元,用于响应于所述传感器的感应,发出所述触发信号。Optionally, the video processing device further includes: a sensing unit, configured to sense that a person is present in the shooting range of the camera device through a sensor provided on the camera device before the trigger signal is received; A response unit is used for sending out the trigger signal in response to the sensing of the sensor.
可选地,所述传感器为以下至少之一:红外感应单元,射频感应单元,雷达探测单元。Optionally, the sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
可选地,所述视频处理装置还包括:第二接收单元,用于在接收到所述触发信号之前,接收用户对所述摄像装置中的开关的操作;第二响应单元,用于响应于所述操作,发出所述触发信号。Optionally, the video processing device further includes: a second receiving unit, configured to receive a user's operation of a switch in the camera device before receiving the trigger signal; and a second response unit, configured to respond to The operation sends the trigger signal.
可选地,所述开关为以下至少之一:按键开关单元,触摸开关单元,光电开关单元。Optionally, the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
可选地,所述视频处理装置还包括:第三接收单元,用于在接收到所述触发信号之前,接收用户在软件界面上的操作;第三响应单元,用于响应于所述操作,通过网络向所述摄像装置发出所述触发信号。Optionally, the video processing device further includes: a third receiving unit, configured to receive a user's operation on the software interface before receiving the trigger signal; a third response unit, configured to respond to the operation, The trigger signal is sent to the camera device via the network.
可选地,所述装置还包括:第三获取单元,用于在接收用户在软件界面上的操作之前,通过扫描设置在所述摄像装置上的图形化码获取所述摄像装置的标识信息;显示单元,用于根据所述标识信息在所述软件界面上显示能够对所述摄像装置所进行的操作。Optionally, the device further includes: a third obtaining unit, configured to obtain the identification information of the camera device by scanning a graphical code set on the camera device before receiving the user's operation on the software interface; The display unit is configured to display the operations that can be performed on the camera device on the software interface according to the identification information.
可选地,所述视频处理装置还包括:第四获取单元,用于在接收用户在软件界面上的操作之前,在所述软件界面显示所述人的手持设备的情况下,获取所述手持设备的地理位置信息;显示单元,用于根据所述地理位置信息在所述软件界面上显示距离所述手持设备预定范围内的所述人能够控制的所述摄像装置,以及能过对所述摄像装置所进行的操作。Optionally, the video processing apparatus further includes: a fourth acquiring unit, configured to acquire the handheld device when the software interface displays the human handheld device before receiving the user's operation on the software interface Geographic location information of the device; a display unit for displaying on the software interface the camera device that can be controlled by the person within a predetermined range of the handheld device according to the geographic location information, and the The operation performed by the camera.
Optionally, the device further includes: a second acquiring unit, configured to, after the identity identification information corresponding to the person in the video is extracted from the video and the correspondence between the identity identification information and the video is saved, acquire the identity identification information of a video to be extracted, and search the saved videos for one or more videos corresponding to the identity identification information of the video to be extracted; and a display unit, configured to display the one or more videos to the user corresponding to the identity identification information of the video to be extracted.
Optionally, the extraction unit is configured to identify, from the person, attachments on the person and/or biological characteristics of the person, and to use the characteristic information of the attachments and/or the characteristic information of the biological characteristics as the identity identification information for identifying the person; the second acquiring unit is configured to acquire the characteristic information of the person's attachments and/or the identification information of the person's biological characteristics, and to determine the characteristic information of the attachments and/or the characteristic information corresponding to the biological characteristics as the identity identification information of the person; wherein the attachments include at least one of the following: clothing, accessories, and hand-held objects; the biological characteristics include at least one of the following: facial features and posture features; and the attachments are used to uniquely identify the person within a predetermined area.
Optionally, the extraction unit is further configured to extract sensed radio frequency identification information from a radio frequency signal and use the radio frequency identification information as the identity identification information for identifying the person, or to extract the identity identification information identifying the person from a network trigger signal; the second acquiring unit is further configured to acquire the radio frequency identification information sensed from the radio frequency signal and determine the radio frequency identification information as the identity identification information of the person, or to extract the identity identification information identifying the person from the network trigger signal.
Optionally, in a case where there are multiple people in the video, the extraction unit includes: a first identification module, configured to identify the attachments and/or biological characteristics of each of the multiple people; and a first saving module, configured to determine the characteristic information of the attachments and/or the characteristic information corresponding to the biological characteristics of each person as the identity identification information of each of the multiple people, and to save the correspondence between the identity identification information of each of the multiple people and the video.
Optionally, the first saving module includes: a first determining submodule, configured to determine the time node at which the identity identification information of each of the multiple people is recognized in the video; a second determining submodule, configured to use the time node as the time label of the identity identification information of each of the multiple people; and a saving submodule, configured to save the correspondence between the identity identification information of each of the multiple people, the time label added to the identity identification information of each of the multiple people, and the video.
Optionally, the movement trajectory of the camera includes at least one of the following: a reciprocating trajectory between a predetermined starting point and a predetermined end point, a cyclic trajectory along a predetermined path, a trajectory designed by a predetermined program, and a tracking trajectory following a target object.
Optionally, the movement mode of the camera is at least one of the following: track-based movement and rotational movement.
Optionally, the driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
According to another aspect of the embodiments of the present application, a storage medium is further provided. The storage medium includes a stored program, and the program, when executed, performs the video processing method described in any one of the above.
According to another aspect of the embodiments of the present application, a processor is further provided. The processor is configured to run a program, and the program, when running, performs the video processing method described in any one of the above.
In the embodiments of the present application, a trigger signal is received, where the trigger signal is used to trigger a camera device to shoot; the camera device includes a driving device and a camera, the driving device is used to drive the camera in the camera device to move, and the camera is used to shoot video while moving; the video shot by the camera under the trigger of the trigger signal is acquired; and the identity identification information corresponding to the person in the video is extracted, and the correspondence between the identity identification information and the video is saved. The video processing method of the present application saves, in advance, the video containing a user in correspondence with the identity identification information of the person recognized from the video, and queries the pre-stored video library for the user's video according to the user's identity identification information. This achieves the technical effect of improving the reliability of the video processing method, improves the user experience, and thus solves the technical problem of low reliability of video processing methods in the related art.
Description of the Drawings
The drawings described here are used to provide a further understanding of the present application and constitute a part of the present application. The exemplary embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application. In the drawings:
Fig. 1 is a flowchart of a video processing method according to Embodiment 1 of the present application;
Fig. 2 is a schematic diagram of an application scenario of a photographing device according to Embodiment 1 of the present application;
Fig. 3 is a schematic diagram of a photographing device according to Embodiment 1 of the present application;
Fig. 4 is a first schematic diagram of an application scenario of a photographing device according to Embodiment 2 of the present application;
Fig. 5 is a second schematic diagram of an application scenario of a photographing device according to Embodiment 2 of the present application;
Fig. 6 is a schematic diagram of a photographing device according to Embodiment 2 of the present application;
Fig. 7 is a first schematic diagram of an application scenario of a photographing device according to Embodiment 3 of the present application;
Fig. 8 is a second schematic diagram of an application scenario of a photographing device according to Embodiment 3 of the present application;
Fig. 9 is a first schematic diagram of an application scenario of a photographing device according to Embodiment 4 of the present application;
Fig. 10 is a second schematic diagram of an application scenario of a photographing device according to Embodiment 4 of the present application;
Fig. 11 is a schematic diagram of a photographing device according to Embodiment 4 of the present application;
Fig. 12 is a schematic diagram of an application scenario of a photographing device according to Embodiment 5 of the present application;
Fig. 13 is a schematic diagram of an application scenario of a photographing device according to Embodiment 6 of the present application;
Fig. 14 is a schematic diagram of a photographing device according to Embodiment 6 of the present application;
Fig. 15 is a schematic diagram of an application scenario of a photographing device according to Embodiment 7 of the present application;
Fig. 16 is a schematic diagram of a photographing device according to Embodiment 7 of the present application;
Fig. 17 is a schematic diagram of an application scenario of a photographing device according to Embodiment 8 of the present application;
Fig. 18 is a schematic diagram of a photographing device according to Embodiment 8 of the present application;
Fig. 19 is a schematic diagram of an application scenario of a photographing device according to Embodiment 9 of the present application;
Fig. 20 is a schematic diagram of a photographing device according to Embodiment 9 of the present application;
Fig. 21 is a schematic diagram of an application scenario of a photographing device according to Embodiment 10 of the present application;
Fig. 22 is a schematic diagram of a photographing device according to Embodiment 10 of the present application;
Fig. 23 is a schematic diagram of an application scenario of a photographing device according to Embodiment 11 of the present application;
Fig. 24 is a schematic diagram of a photographing device according to Embodiment 11 of the present application;
Fig. 25 is a schematic diagram of an application scenario of a photographing device according to Embodiment 12 of the present application;
Fig. 26 is a schematic diagram of a photographing device according to Embodiment 12 of the present application;
Fig. 27 is a schematic diagram of an application scenario of a photographing device according to Embodiment 13 of the present application;
Fig. 28 is a schematic diagram of a photographing device according to Embodiment 13 of the present application;
Fig. 29 is a schematic diagram of an application scenario of a photographing device according to Embodiment 14 of the present application;
Fig. 30 is a schematic diagram of a photographing device according to Embodiment 14 of the present application;
Fig. 31 is a schematic diagram of an application scenario of a photographing device according to Embodiment 15 of the present application;
Fig. 32 is a schematic diagram of a photographing device according to Embodiment 15 of the present application;
Fig. 33 is a schematic diagram of an application scenario of a photographing device according to Embodiment 16 of the present application;
Fig. 34 is a schematic diagram of a photographing device according to Embodiment 16 of the present application;
Fig. 35 is a first schematic diagram of an application scenario of a photographing device according to Embodiment 17 of the present application;
Fig. 36 is a second schematic diagram of an application scenario of a photographing device according to Embodiment 17 of the present application;
Fig. 37 is a schematic diagram of a photographing device according to Embodiment 17 of the present application; and
Fig. 38 is a schematic diagram of a video processing device according to Embodiment 18 of the present application.
Detailed Description of the Embodiments
In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, rather than all of them. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the present application described herein can be implemented in an order other than those illustrated or described herein. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
Embodiment 1
According to an embodiment of the present application, a method embodiment of a video processing method is provided. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that described here.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application. As shown in Fig. 1, the video processing method includes the following steps.
Step S102: receiving a trigger signal, where the trigger signal is used to trigger a camera device to shoot; the camera device includes a driving device and a camera, the driving device is used to drive the camera in the camera device to move, and the camera is used to shoot video while moving.
Optionally, the driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving. Mechanical driving can be implemented by rollers, pull ropes, conveyor belts, lead screws, and the like; electromagnetic driving can be implemented by linear motors, magnetic levitation, and the like; and pressure driving can be implemented by fluid pressure such as hydraulic or pneumatic pressure, for example, water power, wind power, a hydraulic pump, or an air pump.
Optionally, when the driving device drives the camera in the camera device to move, the movement can follow a certain movement trajectory. The movement trajectory of the camera may include at least one of the following: a reciprocating trajectory between a predetermined starting point and a predetermined end point, a cyclic trajectory along a predetermined path, a trajectory designed by a predetermined program, and a tracking trajectory following a target object. That is, the movement trajectory may be a reciprocating movement between a predetermined starting point and end point, a cyclic movement along a predetermined path, a movement executing a predetermined program, a tracking movement following a target person, or a mixed execution of the above movement modes. An example of mixed execution is tracking the target person for a fixed duration, then executing a predetermined program, and tracking the target person again for a fixed duration after the predetermined program is completed.
Optionally, the movement mode of the camera is at least one of the following: track-based movement and rotational movement. The track may be a slide rail, a telescopic rail, or a rope rail; the rotational movement may be single-joint rotation or multi-joint rotation, and the rotating mechanism may be a rocker arm or a rotating wheel. In addition, the movement mode may also be a mixture of track-based movement and rotational movement.
In an optional embodiment, before the trigger signal is received, the video processing method may further include: receiving a user's operation on a software interface; and in response to the operation, sending the trigger signal to the camera device through the network.
In one aspect, before receiving the user's operation on the software interface, the video processing method may further include: acquiring the identification information of the camera device by scanning a graphical code provided on the camera device; and displaying, on the software interface according to the identification information, the operations that can be performed on the camera device.
In another aspect, before receiving the user's operation on the software interface, the video processing method may further include: in a case where the software interface is displayed on a person's handheld device, acquiring the geographic location information of the handheld device; and displaying, on the software interface according to the geographic location information, the camera devices within a predetermined range of the handheld device that the person can control, as well as the operations that can be performed on those camera devices.
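As a minimal illustration of the location-based selection described above, the following Python sketch filters a registry of camera devices by distance from the handheld device. The registry layout, the field names, and the 200 m default radius are assumptions made only for this sketch and are not part of the present application.

    import math

    # Hypothetical registry of camera devices: id -> coordinates and permitted operations.
    CAMERA_DEVICES = {
        "cam-001": {"lat": 36.0671, "lon": 120.3826, "operations": ["start", "stop"]},
        "cam-002": {"lat": 36.0900, "lon": 120.4000, "operations": ["start"]},
    }

    def distance_m(lat1, lon1, lat2, lon2):
        # Haversine distance in meters between two WGS-84 coordinates.
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def nearby_devices(handheld_lat, handheld_lon, radius_m=200.0):
        # Return the devices within the predetermined range, with the operations they allow.
        return {
            dev_id: info["operations"]
            for dev_id, info in CAMERA_DEVICES.items()
            if distance_m(handheld_lat, handheld_lon, info["lat"], info["lon"]) <= radius_m
        }

The software interface would then list only the returned devices and their permitted operations to the user.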
For example, the data information acquired over the network and sent by a portable terminal APP contains the user's identity identifier and confirms the use of the video capture device. The data information confirming the use of the video capture device may be generated when the user enters the code of the video capture device in the APP and confirms its use, may be generated when the user uses the APP to scan the two-dimensional code of the video capture device, or may be generated when the portable terminal APP determines that the current positioning information of the portable terminal and the location information of the video capture device satisfy a predetermined condition.
Step S104: acquiring the video shot by the camera under the trigger of the trigger signal.
In an optional embodiment, before the trigger signal is received, the video processing method may further include: sensing, through a sensor provided on the camera device, that a person appears within the shooting range of the camera device; and sending the trigger signal in response to the sensing of the sensor.
Optionally, the above sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
That is, in the present application, the sensor used to trigger the camera device may be an infrared sensing unit that responds to human body infrared radiation, a radio frequency sensing unit set to respond to radio frequency signals, or a radar detection unit that responds to moving objects. In the embodiments of the present application, the radio frequency sensing unit may be a radio frequency identification (RFID) card.
In another optional embodiment, before the trigger signal is received, the video processing method may further include: receiving a user's operation of a switch in the camera device; and sending the trigger signal in response to the operation.
Optionally, the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
That is, in the present application, the switch used to trigger the camera device may be a photoelectric switch unit that responds to a human body passing by, a key switch unit that responds to a press by a human body, or a touch switch unit that responds to a human touch.
In addition, the camera device may also be triggered to start by a feature trigger unit that responds to a person's gesture information, mouth shape information, or body shape information, or by an instruction switch that responds to a network signal, for example, instruction information sent by a portable terminal APP through the network. The instruction information may be generated by selecting a nearby video capture device in the APP, by entering the code of the video capture device in the APP or scanning the two-dimensional code of the device with the APP, or when the APP determines that the positioning coordinates of the terminal are within the area set for the mobile video acquisition device.
Step S106: extracting the identity identification information corresponding to the person in the video, and saving the correspondence between the identity identification information and the video.
Optionally, extracting the identity identification information corresponding to the person in the video may include: identifying, from the person, attachments on the person and/or biological characteristics of the person; and using the characteristic information of the attachments and/or the characteristic information of the biological characteristics as the identity identification information for identifying the person.
In addition, in the embodiments of the present application, the acquired video and the identity identification information recognized from the video are saved in correspondence, so that the supporting system can perform a corresponding extraction operation on the video according to the saved correspondence, which makes it convenient for visitors to obtain video clips containing themselves.
Furthermore, when the acquired video and the identity identification information recognized from the video are saved in correspondence, the video file may be mapped in a database and saved in correspondence with the identity identification identifier, or the video file may contain digest information of the identity identification identifier.
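The two storage options mentioned above can be sketched in Python as follows; the table layout, the field names, and the JSON sidecar used for the digest are illustrative assumptions, not the actual implementation of the application.

    import json
    import sqlite3

    def save_mapping(db_path, video_file, identity_ids):
        # Option 1: map the video file name to identity identifiers in a database.
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS video_identity (video_file TEXT, identity_id TEXT)"
        )
        conn.executemany(
            "INSERT INTO video_identity (video_file, identity_id) VALUES (?, ?)",
            [(video_file, identity_id) for identity_id in identity_ids],
        )
        conn.commit()
        conn.close()

    def write_digest(video_file, identity_ids):
        # Option 2: keep a digest of the identity identifiers alongside the video file
        # (here as a JSON sidecar; an embedded metadata track would serve the same purpose).
        with open(video_file + ".digest.json", "w", encoding="utf-8") as f:
            json.dump({"video_file": video_file, "identity_ids": identity_ids}, f)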
It can be seen from the above that, in the embodiments of the present application, a trigger signal may be received, where the trigger signal is used to trigger the camera device to shoot; the camera device includes a driving device and a camera, the driving device is used to drive the camera in the camera device to move, and the camera is used to shoot video while moving; the video shot by the camera under the trigger of the trigger signal is acquired; and the identity identification information corresponding to the person in the video is extracted and the correspondence between the identity identification information and the video is saved. In this way, the purpose of saving, in advance, the video containing the user in correspondence with the identity identification information of the person recognized from the video is achieved.
It is easy to notice that, since the video is recognized after being shot to obtain the identity identification information of the person in the video, and the video is saved in correspondence with the recognized identity identification information, when a user requests a video of himself or herself, the video consistent with the user's identity identification information can be retrieved from the video library according to the user's identity identification information. This achieves the purpose of saving, in advance, the video containing the user in correspondence with the identity identification information of the person recognized from the video and of querying the pre-stored video library for the user's video according to the user's identity identification information, thereby achieving the technical effect of improving the reliability of the video processing method and improving the user experience.
Therefore, the video processing method of the present application solves the technical problem of low reliability of video processing methods in the related art.
In an optional embodiment, after the identity identification information corresponding to the person in the video is extracted from the video and the correspondence between the identity identification information and the video is saved, the video processing method further includes: acquiring the identity identification information of a video to be extracted, and searching the saved videos for one or more videos corresponding to the identity identification information of the video to be extracted; and displaying the one or more videos to the user corresponding to the identity identification information of the video to be extracted.
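A minimal sketch of this lookup step, reusing the hypothetical mapping table from the earlier storage sketch, is given below; the schema is assumed for illustration only.

    import sqlite3

    def find_videos_for_identity(db_path, identity_id):
        # Return the file names of all saved videos associated with the given identity identifier.
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT video_file FROM video_identity WHERE identity_id = ?",
            (identity_id,),
        ).fetchall()
        conn.close()
        return [row[0] for row in rows]

The returned file names would then be used to present the one or more videos to the corresponding user.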
Optionally, acquiring the identity identification information of the video to be extracted may include: acquiring the characteristic information of the person's attachments and/or the identification information of the person's biological characteristics, and determining the characteristic information of the attachments and/or the characteristic information corresponding to the biological characteristics as the identity identification information of the person; wherein the attachments include at least one of the following: clothing, accessories, and hand-held objects; the biological characteristics include at least one of the following: facial features and posture features; and the attachments are used to uniquely identify the person within a predetermined area.
In addition, in a case where there are multiple people in the video, extracting the identity identification information corresponding to the people in the video may include: identifying the attachments and/or biological characteristics of each of the multiple people; determining the characteristic information of the attachments and/or the characteristic information corresponding to the biological characteristics of each person as the identity identification information of each of the multiple people; and saving the correspondence between the identity identification information of each of the multiple people and the video.
In an optional embodiment, saving the correspondence between the identity identification information of each of the multiple people and the video includes: determining the time node at which the identity identification information of each of the multiple people is recognized in the video; using the time node as the time label of the identity identification information of each of the multiple people; and saving the correspondence between the identity identification information of each of the multiple people, the time label added to the identity identification information of each of the multiple people, and the video.
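One possible in-memory structure for holding the per-person time labels before they are written to storage is sketched below; the field names and layout are assumptions made for illustration.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class IdentityTimeLabels:
        # Time labels (seconds from the start of the video) at which an identity was recognized.
        identity_id: str
        time_labels: List[float] = field(default_factory=list)

    @dataclass
    class VideoRecord:
        video_file: str
        identities: Dict[str, IdentityTimeLabels] = field(default_factory=dict)

        def add_recognition(self, identity_id: str, time_node: float):
            # Record one recognition event: the time node becomes a time label of that identity.
            entry = self.identities.setdefault(identity_id, IdentityTimeLabels(identity_id))
            entry.time_labels.append(time_node)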
In another optional embodiment, extracting the identity identification information corresponding to the person in the video from the video includes: extracting sensed radio frequency identification information from a radio frequency signal and using the radio frequency identification information as the identity identification information for identifying the person, or extracting the identity identification information identifying the person from a network trigger signal. Acquiring the identity identification information of the video to be extracted includes: acquiring the radio frequency identification information sensed from the radio frequency signal and determining the radio frequency identification information as the identity identification information of the person, or extracting the identity identification information identifying the person from the network trigger signal.
For example, extracting the identity identification information corresponding to the person in the video from the video can be implemented by an identity recognition unit, and the identity recognition unit may acquire the information from a radio frequency identification unit carried by the person, such as an RFID card.
For another example, extracting the identity identification information corresponding to the person in the video from the video can be implemented by an identity recognition unit, and the identity recognition unit may acquire the identification information over the network, for example, data information acquired over the network and sent by a portable terminal APP that contains the user's identity identifier and confirms the use of the video capture device. The data information confirming the use of the video capture device may be generated when the user enters the code of the video capture device in the APP and confirms its use, may be generated when the user uses the APP to scan the two-dimensional code of the video capture device, or may be generated when the portable terminal APP determines that the current positioning information of the portable terminal and the location information of the video capture device satisfy a predetermined condition.
Fig. 2 is a schematic diagram of an application scenario of the photographing device according to Embodiment 1 of the present application, and Fig. 3 is a schematic diagram of the photographing device according to Embodiment 1 of the present application. As shown in Fig. 2 and Fig. 3, this embodiment uses a convex track with reciprocating motion, the driving mode is roller driving, and the movement trajectory is a straight back-and-forth path. This embodiment can be applied in an indoor exhibition hall, the trigger mode is a human body infrared sensing switch, and the track of the camera device (including the track and the video capture car) is a convex track.
In this embodiment, proximity switches are provided on both sides of the video capture car, and limit iron blocks are provided at both ends of the track. When the car moves to a limit iron block, the proximity switch senses the limit iron block, the movement in the current direction stops, and movement in the opposite direction starts. When the car moves to the limit iron block at the other end, movement in the current direction stops in response to the proximity switch signal and movement in the opposite direction starts, so that the video capture car performs uninterrupted back-and-forth linear motion between the two limit iron blocks. The video capture car responds to the signal sensed by the human body infrared sensing switch: when human body infrared radiation is sensed, the uninterrupted back-and-forth linear motion is started and the video acquisition operation is performed; when no human body infrared radiation is sensed, the uninterrupted back-and-forth linear motion is stopped.
The video capture car is equipped with running wheels and provides a wireless connection. Proximity switches are provided at both ends of the car, and electricity-collecting carbon brushes are provided at the front and rear of the car; limit iron posts that can be sensed by the proximity switches are provided at both ends of the track, and conductive strips are provided on both sides of the angle-steel track to supply power to the car.
In this embodiment, a video segment can be generated as follows: the video segment is extracted from the continuously captured video stream, with the time point at which the RFID radio frequency signal is sensed as the first frame of the video segment and the time point at which the RFID radio frequency signal disappears as the end frame of the video segment. The sensed RFID radio frequency identifier is saved in the digest of the extracted video file. Triggering the car through the human body infrared sensing switch to start the linear back-and-forth movement and the video acquisition operation, with the RFID radio frequency card providing the identity, prevents the device from performing useless work and allows flexible mobile video capture of visitors at any time. Behind the booth of the exhibition hall, the video capture car reciprocates on the angle-steel track and continuously shoots video of visitors together with the exhibits in the same frame.
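The segment-extraction rule of this embodiment (first frame at the time the RFID signal is sensed, end frame at the time it disappears) could be sketched as follows; the event format and the use of the ffmpeg command-line tool for cutting are assumptions made for illustration.

    import subprocess

    def rfid_segment_bounds(rfid_events):
        # rfid_events: list of (timestamp_seconds, sensed_bool) in chronological order.
        # Returns (start, end): first time the tag is sensed to the time the signal disappears.
        start = end = None
        for timestamp, sensed in rfid_events:
            if sensed and start is None:
                start = timestamp
            if not sensed and start is not None:
                end = timestamp
                break
        return start, end

    def cut_segment(source_video, start, end, output_video):
        # Cut the [start, end] window out of the continuously captured stream (assumes ffmpeg is installed).
        subprocess.run(
            ["ffmpeg", "-i", source_video, "-ss", str(start), "-to", str(end), "-c", "copy", output_video],
            check=True,
        )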
Embodiment 2
Fig. 4 is a first schematic diagram of an application scenario of the photographing device according to Embodiment 2 of the present application, Fig. 5 is a second schematic diagram of an application scenario of the photographing device according to Embodiment 2 of the present application, and Fig. 6 is a schematic diagram of the photographing device according to Embodiment 2 of the present application. As shown in Fig. 4, Fig. 5, and Fig. 6, this embodiment uses a concave circular track, the driving mode is a synchronous belt wheel, and the movement trajectory is a curved loop with person tracking. This embodiment can be applied to a slide, the trigger mode is facial recognition, and the movement form is a concave track.
In this embodiment, after triggering, one video capture car is started and tracks the target person from the starting point to the end point; after reaching the end point, it queues up with the other video capture cars to wait in standby. The video capture car uses a DC miniature gear reduction motor, a synchronous wheel, and a toothed synchronous belt, and can use a wireless connection.
In this embodiment, the first frame of the video segment is generated when the video capture car starts to move, and the end frame is generated when the video capture car reaches the bottom end of the track. The video segment does not contain the capture information from the top of the track back to the end point. The file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier. This embodiment uses facial feature recognition: recognizing a visitor whose facial features have been registered triggers the capture car to start the moving video acquisition operation, so that shooting can be tracked autonomously, achieving presentation effects that conventional shooting methods cannot reach.
When the target person slides down the slide, the first video capture car automatically tracks and shoots, the second video capture car is in the standby position, and four more video capture cars queue in the track on the other side waiting in standby. The video capture car runs in a concave track and is powered through two electricity-collecting carbon brushes. The car is equipped with a DC miniature gear reduction motor, and the synchronous wheel on the output shaft of the reduction motor meshes with the toothed synchronous belt glued inside the track. Two conductive strips are glued to one side wall of the track and supply power to the video capture car through the car's carbon brushes.
Embodiment 3
Fig. 7 is a first schematic diagram of an application scenario of the photographing device according to Embodiment 3 of the present application, and Fig. 8 is a second schematic diagram of an application scenario of the photographing device according to Embodiment 3 of the present application. As shown in Fig. 7 and Fig. 8, in this embodiment multiple video capture cars move cyclically along a serpentine slit-track curve on the ceiling of a T-stage performance hall and shoot in sequence. This embodiment uses a slit-type serpentine track, the driving mode is rollers, and the movement trajectory is a curved loop. This embodiment can be applied to a T-stage runway, a master control switch is used for triggering, and the track is a ceiling slit track.
In this embodiment, after the master control switch is closed, the video capture cars move cyclically along the track curve; after the master control switch is opened, the video capture cars stop moving.
The recognition mode of this embodiment is facial feature recognition. A video segment is extracted from the continuously captured video stream, with a time point a fixed duration before the time point at which the facial features are recognized as the first frame of the video segment, and a time point a fixed duration after the time point of the first frame as the end frame. The captured video segment is analyzed for the degree of change in facial expression, and a video segment that satisfies the determination of rich expression change is saved in correspondence with the facial feature recognition identifier. The file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier. After the mobile video acquisition device is turned on, video is captured continuously, video of a fixed duration before and after the time point at which a person's facial features are recognized is extracted from the continuous video, and the wonderful moments with rich changes in the person's expression are recorded.
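The fixed-duration window around the recognition time point described above can be sketched as follows; the window lengths are assumed values, and the expression-richness analysis itself is a separate step not shown here.

    def recognition_window(recognition_time, lead_seconds=3.0, tail_seconds=8.0, video_duration=None):
        # First frame: a fixed duration before the recognition time point.
        # End frame: a fixed duration after the first-frame time point.
        start = max(0.0, recognition_time - lead_seconds)
        end = start + tail_seconds
        if video_duration is not None:
            end = min(end, video_duration)
        return start, end

The resulting window could then be cut out of the continuous stream in the same way as the cutting step sketched for Embodiment 1.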
Embodiment 4
Fig. 9 is a first schematic diagram of an application scenario of the photographing device according to Embodiment 4 of the present application, Fig. 10 is a second schematic diagram of an application scenario of the photographing device according to Embodiment 4 of the present application, and Fig. 11 is a schematic diagram of the photographing device according to Embodiment 4 of the present application. As shown in Fig. 9, Fig. 10, and Fig. 11, in this embodiment the video capture device is located at the lowest end of a hexagonal column in the non-working state; the target person stands on a lifting platform, and the video capture device is dragged by a pull rope to the top of the hexagonal column. A movable frame is arranged on the periphery of the hexagonal column and drives the video capture device to move. This embodiment uses a pole-type column, a pull rope, and an up-and-down rotating back-and-forth mode, and can be applied in a city park. The trigger mode of this embodiment is gravity pressing, the movement form is a vertical pole track, and the driving mode is a movable frame.
In this embodiment, the pull rope between the lifting platform and the video capture movable frame drives the video capture movable frame to rotate and move. For example, when the lifting platform is pressed down by gravity and descends, the video capture movable frame rotates and moves upward; when the gravity pressing on the lifting platform is removed and it rises, the video capture movable frame rotates and moves downward.
In this embodiment, an external network instruction identifier can be used; for example, an APP-registered user scans the code of the mobile video acquisition device or enters the device number (the APP registration identifier).
The video can be generated as follows: a proximity switch provided on the pole senses whether the video capture movable frame is in the lowest position. When the video capture movable frame leaves the lowest position, video capture starts and the first frame of the video segment is generated in response to the signal of the proximity switch; when the video capture movable frame returns to the lowest position, the end frame of the video segment is generated in response to the signal of the proximity switch and video capture stops.
The captured video segment is matched with the network trigger identifier, and a video segment matched with a network trigger identifier is saved in correspondence with the identifier. The received network trigger identifier is saved in the digest of the generated video file.
By scanning the code with the portable terminal APP to confirm the mobile video acquisition device and using the device to perform the mobile video acquisition operation, the shooting angle of view can rise gradually from a ground-level view containing the user to a high-altitude view overlooking the whole scenic area, achieving presentation effects that conventional shooting methods cannot reach.
Embodiment 5
Fig. 12 is a schematic diagram of an application scenario of the photographing device according to Embodiment 5 of the present application. As shown in Fig. 12, this embodiment can be applied to a mountain-climbing stairway, with a rope track provided on one side of the stairway. A rope puller drags the video capture device to slide along the rope track. This embodiment can be called a rope type: a rope puller is used as the drive, the trajectory is a fixed back-and-forth path, the trigger mode is an RFID radio frequency card, and the movement form is a rope track.
Both ends of the rope track and the mobile video acquisition device are provided with RFID readers, which can read, within 10 m, passive RFID radio frequency cards complying with the 860-960 MHz air interface parameters of the ISO/IEC 18000-6 standard.
The video capture unit is connected to the pull rope, and the rope puller is provided at the top of the rope track.
In response to the radio frequency signal identifier received by an RFID reader, the rope puller drags the mobile video acquisition device back and forth between the two ends of the rope track; a fixed duration after the time point at which the RFID reader no longer receives the radio frequency signal, the rope puller stops dragging the mobile video acquisition device.
In response to the radio frequency information identifier received by the RFID reader provided on the mobile video acquisition device, the rope puller reduces the dragging speed; when the RFID reader provided on the mobile video acquisition device no longer receives the radio frequency information identifier, the rope puller restores the dragging speed.
This embodiment uses an RFID radio frequency card for identity recognition. Video capture is performed in response to receiving the RFID radio frequency information; when the RFID radio frequency information is no longer received, video capture ends after a fixed delay and a video file is generated. The file name of the generated video file is mapped in the database, and the video file in the database corresponds to multiple pieces of RFID radio frequency information containing time labels.
The RFID radio frequency information containing time labels records the sensed RFID radio frequency information and the corresponding time point information, where the time point information is the time point at which the radio frequency information is received and the time point at which the radio frequency information is no longer received.
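One way to hold such time-labelled RFID records is sketched below; the field names and the update rule are assumptions made for illustration.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TimeTaggedRfid:
        # One sensed tag with the time it was first received and the time it was lost.
        tag_id: str
        received_at: float
        lost_at: Optional[float] = None

    def update_tag_log(log: List[TimeTaggedRfid], tag_id: str, timestamp: float, sensed: bool):
        # Open a new record when a tag appears; close the latest open record when it disappears.
        open_records = [r for r in log if r.tag_id == tag_id and r.lost_at is None]
        if sensed and not open_records:
            log.append(TimeTaggedRfid(tag_id, timestamp))
        elif not sensed and open_records:
            open_records[-1].lost_at = timestamp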
Through the ultra-high-frequency RFID card, the mobile video acquisition device can capture video of users within 10 m, and by reducing the movement speed when moving close to the user, the presentation effect of the footage shot for the user is enhanced.
Embodiment 6
Fig. 13 is a schematic diagram of an application scenario of the photographing device according to Embodiment 6 of the present application, and Fig. 14 is a schematic diagram of the photographing device according to Embodiment 6 of the present application. As shown in Fig. 13 and Fig. 14, this embodiment can be applied to trampoline sports; the video capture device tracks the target person on the trampoline and moves up and down while shooting. Linear motor induction coils are provided on both sides of the video capture device and run through two tubular magnetic shafts respectively.
This embodiment can be called a linear motor type; the driving mode is a tubular high-speed servo linear motor, and the trajectory is linear back-and-forth. The movement form can be a tubular track. The tubular high-speed servo linear motor responds to the portrait analysis of the video image performed by the background server: after the person enters the target area, the magnetic-shaft linear motor moves to the position corresponding to the height of the person's head and then tracks the height of the person's head in response to the portrait analysis of the video image.
The trigger mode used can be an external network instruction; for example, the portable terminal APP detects that its positioning coordinates are within a predetermined area and sends a trigger instruction to the mobile video acquisition device in the predetermined area. When the positioning coordinates of the portable terminal are within the area set for the mobile video acquisition device, the APP registration identifier can be acquired and used as the identity identification identifier. A video segment is extracted from the continuously captured video stream, with a time point a fixed duration after the time point at which the person enters the target area in the video image analysis as the first frame of the video segment, and a time point a fixed duration before the time point at which the person leaves the target area in the video image analysis as the end frame of the video segment.
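A minimal sketch of this geofence-style trigger on the APP side is given below; the rectangular area, the field names, and the trigger payload are assumptions made for illustration, and send stands for whatever network call the APP uses to reach the device.

    def in_predetermined_area(lat, lon, area):
        # area: corner coordinates of a rectangular region, e.g.
        # {"lat_min": ..., "lat_max": ..., "lon_min": ..., "lon_max": ...}
        return area["lat_min"] <= lat <= area["lat_max"] and area["lon_min"] <= lon <= area["lon_max"]

    def maybe_send_trigger(lat, lon, area, app_registration_id, send):
        # If the terminal is inside the area, send a trigger carrying the APP registration identifier,
        # which the device side then uses as the identity identification identifier.
        if in_predetermined_area(lat, lon, area):
            send({"type": "trigger", "identity_id": app_registration_id})
            return True
        return False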
Slow-motion effect processing is performed on the extracted video segment, and the extracted video segment is saved in correspondence with the facial feature recognition identifier. The file name of the generated video file is mapped in the database, and the video file in the database corresponds to the network trigger identifier.
The high-speed servo linear motor drives the high-speed shooting device to track and move in response to the person's movement state, and the slow-motion effect achieves results that conventional shooting cannot present.
In the following embodiments, Embodiments 7 to 10 and Embodiments 13 to 17 all use facial recognition as the identity recognition mode, Embodiment 11 uses an RFID radio frequency card as the identity recognition mode, and Embodiment 12 uses image feature recognition of a badge as the identity recognition mode.
Embodiment 7
Fig. 15 is a schematic diagram of an application scenario of the photographing device according to Embodiment 7 of the present application, and Fig. 16 is a schematic diagram of the photographing device according to Embodiment 7 of the present application. As shown in Fig. 15 and Fig. 16, this embodiment can be called a magnetic levitation track and can be applied in a science and technology museum. In this embodiment, the target person touches and clicks a touch screen, triggering the magnetic levitation video capture car to move back and forth in a straight line while shooting. Two sliding plates with built-in magnets are provided at the lower part of the magnetic levitation video capture car, so that the video capture car levitates above the track. In this embodiment, the driving mode is magnets and induction coils, the trajectory is linear back-and-forth, the trigger mode is a touch switch, and the movement form is a magnetic levitation track. The sliding plates on both sides of the magnetic levitation video capture car are provided with magnets, magnets with the same polarity as the car magnets are provided on both sides of the track, and the magnetic levitation video capture car therefore levitates above the track. A horizontal magnetic column is provided in the middle of the car, a horizontal coil that attracts the horizontal magnetic column is provided in the middle of the track, and the horizontal coil in the middle of the track drives the horizontal magnetic column of the car to move.
The magnetic levitation video capture car moves back and forth on the track according to a predetermined programmed trajectory. In response to the touch switch signal, a fixed number of linear back-and-forth movements are started and the video acquisition operation is performed. A video segment is extracted from the continuously captured video stream, with a time point a fixed duration after the time point at which the person enters the target area in the video image analysis as the first frame of the video segment, and a time point a fixed duration before the time point at which the person leaves the target area in the video image analysis as the end frame of the video segment.
The extracted video segment is saved in correspondence with the facial feature recognition identifier. The file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
Triggering the car through the human body infrared sensing switch to start the linear back-and-forth movement and the video acquisition operation prevents the device from performing useless work and allows flexible mobile video capture of visitors at any time.
Embodiment 8
Fig. 17 is a schematic diagram of an application scenario of the photographing device according to Embodiment 8 of the present application, and Fig. 18 is a schematic diagram of the photographing device according to Embodiment 8 of the present application. As shown in Fig. 17 and Fig. 18, this embodiment can be called a lifting slide-table mode and can be applied to extreme sports. At an extreme sports venue, a lifting lead-screw slide table tracks the target person for moving shots. A servo motor drives the lead screw to rotate, driving the video capture device to move up and down. The driving mode of this embodiment is a lead screw and a slide table, the trajectory is a tracking trajectory, the trigger mode is portrait analysis, and the movement form is a linear track.
The lead-screw slide table responds to the portrait analysis of the video image performed by the background server: after the person enters the target area, the slide table moves to the position corresponding to the height of the person's head and then tracks the height of the person's head in response to the portrait analysis of the video image. A video segment is extracted from the continuously captured video stream, with a time point a fixed duration after the time point at which the person enters the target area in the video image analysis as the first frame of the video segment, and a time point a fixed duration before the time point at which the person leaves the target area in the video image analysis as the end frame of the video segment. The extracted video segment is saved in correspondence with the facial feature recognition identifier. The file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier. By tracking the movement in response to the person's movement state, the motion process is recorded, achieving effects that conventional shooting cannot present.
Embodiment 9
FIG. 19 is a schematic diagram of an application scenario of the photographing device according to Embodiment 9 of the present application, and FIG. 20 is a schematic diagram of the photographing device according to Embodiment 9 of the present application. As shown in FIG. 19 and FIG. 20, this embodiment may be referred to as the hanging-wheel mode and can be applied to a walkway in a scenic area, where a mobile video capture device mounted on the upper part of a street light pole moves up and down to shoot the target person. The upper part of the street light pole is fitted with a spherical radar sensing device, a rope retractor, and a hanging-wheel video capture device. In this embodiment the driving mode is the rope retractor, the trajectory is an up-and-down reciprocation, the trigger mode is a radar sensing switch, and the movement form is a sling reciprocation. In this embodiment, when the radar sensing switch detects a moving object, the video capture device moves up and down along the sling; when the radar sensing switch detects no moving object, the video capture device stops moving.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration before the time point at which a facial feature is recognized is taken as the first frame of the video clip, and the frame at a fixed duration after the time point of the first frame is taken as the end frame. Video clips are extracted separately for different facial feature recognition subjects, and each extracted clip is saved in correspondence with its facial feature recognition identifier. The file summary of the generated video file contains multiple corresponding facial feature recognition identifiers carrying time tags.
The above facial feature recognition identifier carrying a time tag records the recognized facial feature and the corresponding time point information, where the time point information is the time point at which the facial feature is recognized and/or the time point at which the facial feature can no longer be recognized. With this embodiment, the autonomous shooting angle can rise gradually from a ground-level view containing the visitors to a high-altitude view overlooking the whole scenic area, achieving a presentation effect that conventional shooting means cannot achieve.
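One way to represent the time-tagged facial feature recognition identifiers in the file summary is as a small structured record per identifier. The JSON layout, field names, and example values below are assumptions for illustration only.

```python
# Minimal sketch of a file summary carrying time-tagged facial feature identifiers.
import json
from typing import Optional

def add_time_tagged_id(summary: dict, face_id: str,
                       t_recognized: float, t_lost: Optional[float]) -> None:
    """Record an identifier with the time points at which the facial feature was
    recognized and (if known) no longer recognized."""
    summary.setdefault("face_ids", []).append(
        {"face_id": face_id, "t_recognized": t_recognized, "t_lost": t_lost})

summary = {"file_name": "walkway_0003.mp4", "face_ids": []}
add_time_tagged_id(summary, "face-0001", t_recognized=4.2, t_lost=11.8)
add_time_tagged_id(summary, "face-0002", t_recognized=6.0, t_lost=None)
print(json.dumps(summary, indent=2))
```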
Embodiment 10
FIG. 21 is a schematic diagram of an application scenario of the photographing device according to Embodiment 10 of the present application, and FIG. 22 is a schematic diagram of the photographing device according to Embodiment 10 of the present application. As shown in FIG. 21 and FIG. 22, this embodiment may be referred to as the conveyor-belt mode and can be applied to crowded hot spots of a scenic area, where the video capture devices on a conveyor belt continuously move while shooting the people in the shooting area. The conveyor belt is driven by a geared motor and carries multiple video capture devices as they shoot.
In this embodiment the driving mode is a geared motor and rollers, the trajectory is a belt loop, the trigger mode is a master control switch, and the movement form is conveyor-belt circulation. When the master control switch is closed, the video capture devices move with the conveyor belt; when the master control switch is opened, the video capture devices stop moving.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration before the time point at which a facial feature is recognized is taken as the first frame of the video clip, and the frame at a fixed duration after the time point of the first frame is taken as the end frame. Video clips are extracted separately for different facial feature recognition subjects, and each extracted clip is saved in correspondence with its facial feature recognition identifier.
The file name of the extracted video file is mapped in the database, where the video file corresponds to the facial feature recognition identifier.
By installing multiple mobile capture devices on the conveyor belt, efficient moving video capture can be performed in crowded areas, and people can be shot from multiple angles.
Embodiment 11
FIG. 23 is a schematic diagram of an application scenario of the photographing device according to Embodiment 11 of the present application, and FIG. 24 is a schematic diagram of the photographing device according to Embodiment 11 of the present application. As shown in FIG. 23 and FIG. 24, this embodiment may be referred to as the scissor-lift mode and can be applied to temporary attractions. In this embodiment the video capture device is temporarily placed at a temporary attraction to shoot the target person from bottom to top. A hydraulic station drives an oil cylinder, which drives the scissor-fork lifting mechanism to reciprocate up and down.
In this embodiment the driving mode is hydraulic, the trajectory is a programmed trajectory, the trigger mode is a master control switch, and the movement form is a retractable lifting structure, for example a scissor-fork lift. When the master control switch is closed, the video capture device moves up and down with the lifting structure; when the master control switch is opened, the video capture device stops moving.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration before the time point at which the RFID reader on the mobile video acquisition device receives the radio frequency identification information is taken as the first frame of the video clip, and the frame at a fixed duration after the time point at which the RFID reader no longer receives the radio frequency identification information is taken as the end frame.
The file name of the extracted video file is mapped in the database, where the video file corresponds to the RFID radio frequency identification information.
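Once a frame rate and the fixed lead and tail durations are chosen, the RFID-gated clip boundaries described above map directly to frame indices. The constants and function below are illustrative assumptions, not values from the original disclosure.

```python
# Minimal sketch of the RFID-gated clip boundaries.
FPS = 25
LEAD_SECONDS = 2   # fixed duration before the tag is first read
TAIL_SECONDS = 2   # fixed duration after the tag is last read

def rfid_clip_bounds(t_first_read: float, t_last_read: float) -> tuple:
    """First frame precedes the first tag read; end frame follows the last read."""
    first_frame = max(0, int((t_first_read - LEAD_SECONDS) * FPS))
    end_frame = int((t_last_read + TAIL_SECONDS) * FPS)
    return first_frame, end_frame

print(rfid_clip_bounds(t_first_read=30.0, t_last_read=45.0))  # (700, 1175)
```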
With an ultra-high-frequency RFID card, the mobile video acquisition device can capture video of users within 10 m, and the autonomous shooting angle can rise gradually from a ground-level view containing the visitor to a high-altitude view overlooking the whole scenic area, enhancing the presentation of shots of the user. The mobile video acquisition device can be towed to a temporary location at any time to perform video capture tasks according to the temporary needs of the scenic area.
Embodiment 12
FIG. 25 is a schematic diagram of an application scenario of the photographing device according to Embodiment 12 of the present application, and FIG. 26 is a schematic diagram of the photographing device according to Embodiment 12 of the present application. As shown in FIG. 25 and FIG. 26, this embodiment may be referred to as the multi-section telescopic rod mode and can be applied at the entrance of a scenic area. In this embodiment, the target person passes through the sensing area of a photoelectric switch and, in response to the photoelectric switch signal, the cylinder moves upward, driving the video capture device to shoot the target person. A travel switch is arranged at the top of the vertical pole, and the travel switch signal triggers the cylinder to move downward. This embodiment uses a pneumatic power system composed of an air compressor, an oil-water separator, an electromagnetic directional valve, and an air cylinder to drive the video capture device up and down.
In this embodiment the driving mode is an air cylinder, the trajectory is a reciprocation, the trigger mode is a photoelectric switch, and the movement form is a multi-section cylinder. The mobile video acquisition device is mounted on top of the multi-section cylinder, and an electromagnetic directional valve in the pneumatic line controls the extension and retraction of the multi-section cylinder: the electromagnetic directional valve extends the cylinder in response to the break signal of the through-beam photoelectric switch and retracts the cylinder in response to the closing signal of the travel switch.
The mobile video acquisition device starts capturing the first frame of the video clip at the time point of the break signal of the through-beam photoelectric switch, and the frame at a fixed duration after the time point of the closing signal of the travel switch is taken as the end frame of the captured video clip. The generated video clip is saved in correspondence with the badge feature recognition identifier, and the recognized badge image feature identifier is recorded in the file summary of the generated video file.
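The capture window defined above, running from the break signal of the through-beam photoelectric switch to a fixed duration after the travel switch closes, can be modeled as a small event handler. The event names, the tail duration, and the class layout are illustrative assumptions.

```python
# Minimal event-driven sketch of the capture window.
from typing import Optional

TAIL_SECONDS = 3.0   # assumed fixed duration after the travel switch closes

class CaptureWindow:
    def __init__(self) -> None:
        self.start_t: Optional[float] = None
        self.end_t: Optional[float] = None

    def on_photoelectric_break(self, t: float) -> None:
        self.start_t = t                       # first frame of the clip

    def on_travel_switch_close(self, t: float) -> None:
        self.end_t = t + TAIL_SECONDS          # end frame of the clip

window = CaptureWindow()
window.on_photoelectric_break(t=100.0)
window.on_travel_switch_close(t=104.5)
print(window.start_t, window.end_t)            # 100.0 107.5
```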
When a visitor enters the entrance of the scenic area, the multi-section cylinder is triggered to extend and the moving video acquisition operation starts, so that the autonomous shooting angle can rise gradually from a ground-level view containing the visitor to a high-altitude view overlooking the whole scenic area, achieving a presentation effect that conventional shooting means cannot achieve.
Embodiment 13
FIG. 27 is a schematic diagram of an application scenario of the photographing device according to Embodiment 13 of the present application, and FIG. 28 is a schematic diagram of the photographing device according to Embodiment 13 of the present application. As shown in FIG. 27 and FIG. 28, this embodiment may be referred to as the two-way slide mode and can be applied to an art gallery corridor: when the target person passes through the corridor, the mobile video acquisition device, driven by the two-way slide, tracks and shoots the person. The first rail is provided with a power supply strip and a rack, which engage respectively with a current-collecting carbon brush and the output shaft gear of a first servo motor. The second rail is mounted on the first slide and is provided with a rack that engages with the output shaft gear of a second servo motor. A cable runs between the first servo motor and the second servo motor and is wrapped in a protective drag chain. The video capture device is mounted on the second slide.
In this embodiment the driving mode is a synchronous belt and synchronous wheel, the trajectory is a tracking-and-programmed trajectory, the trigger mode is portrait analysis, and the movement form is a slide rail. In this embodiment the mobile video acquisition device is programmed with multiple sets of predetermined trajectories corresponding to multi-angle shots of the artworks.
In response to the video image portrait analysis performed by the back-end server, the mobile video acquisition device tracks and captures the person's activity for a fixed duration, then executes the predetermined trajectory program for the artwork the person is looking at or an artwork near the person; after the predetermined trajectory program finishes, it resumes tracking the person for a fixed duration in response to the back-end server's portrait analysis.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration after the time point at which video image analysis detects that the person enters the target area is taken as the first frame of the video clip, and the frame at a fixed duration before the time point at which the person leaves the target area is taken as the end frame. Video clips are extracted separately for different facial feature recognition subjects, and each extracted clip is saved in correspondence with its facial feature recognition identifier. The file name of the generated video file is mapped in the database, where the video file corresponds to the facial feature recognition identifier.
By switching between tracking the target person's movement and the predetermined programmed trajectory, the shot can alternate naturally between the person and the scenery being photographed.
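The alternation between person tracking and the predetermined artwork trajectory described in this embodiment can be sketched as a simple scheduling loop. The callables and durations below are hypothetical placeholders rather than the actual drive commands of the two-way slide.

```python
# Minimal scheduling sketch: track for a fixed duration, run the programmed
# trajectory once, then resume tracking.
import time

def alternate(track_person, run_programmed_trajectory,
              track_seconds: float = 10.0, cycles: int = 3) -> None:
    for _ in range(cycles):
        deadline = time.monotonic() + track_seconds
        while time.monotonic() < deadline:
            track_person()                     # follow the portrait analysis
        run_programmed_trajectory()            # multi-angle pass over the artwork
```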
Embodiment 14
FIG. 29 is a schematic diagram of an application scenario of the photographing device according to Embodiment 14 of the present application, and FIG. 30 is a schematic diagram of the photographing device according to Embodiment 14 of the present application. As shown in FIG. 29 and FIG. 30, this embodiment may be referred to as the single-shaft rocker mode and can be applied to a water park. The video capture device is mounted at the bottom end of a water storage bucket: when the water in the bucket reaches a predetermined level, the bucket tips over as its center of gravity shifts and the video capture device moves upward; after the water in the bucket is emptied, the video capture device moves downward as the bucket returns to its original position.
In this embodiment the photographing device may include a bucket, a rotating shaft, a water valve, a video capture device, and a counterweight. The driving mode is water power, the trajectory is a water-pressure reciprocation, the trigger mode is water flow, and the movement form is rotation about a single shaft. When water flows, the bucket swings repeatedly about the shaft, and the mobile video acquisition device at one end of the bucket moves up and down along an arc centered on the shaft.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration after the time point at which video image analysis detects that the person enters the target area is taken as the first frame of the video clip, and the frame at a fixed duration before the time point at which the person leaves the target area is taken as the end frame. The extracted video clip is saved in correspondence with the facial feature recognition identifier. The file name of the extracted video file is mapped in the database, where the video file corresponds to the facial feature recognition identifier.
With this embodiment, the autonomous shooting angle can rise gradually from a ground-level view containing the visitors to a high-altitude view overlooking the whole scenic area, achieving a presentation effect that conventional shooting means cannot achieve.
Embodiment 15
FIG. 31 is a schematic diagram of an application scenario of the photographing device according to Embodiment 15 of the present application, and FIG. 32 is a schematic diagram of the photographing device according to Embodiment 15 of the present application. As shown in FIG. 31 and FIG. 32, this embodiment may be referred to as the turntable mode and can be applied to a water park, where multiple video capture devices are mounted on a rotating water wheel and shoot continuously at the same time. The photographing device of this embodiment may include a water wheel, a water valve, video capture devices, and a counterweight.
In this embodiment the driving mode is water power, the trajectory is a loop, the trigger mode is water flow, and the movement form is rotation about a single shaft. When water flows, the water buckets drive the wheel in a circular motion, and the multiple mobile video acquisition devices mounted on the wheel move in circles about the rotating shaft.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration after the time point at which video image analysis detects that the person enters the target area is taken as the first frame of the video clip, and the frame at a fixed duration before the time point at which the person leaves the target area is taken as the end frame. Video clips are extracted separately for different facial feature recognition subjects, and each extracted clip is saved in correspondence with its facial feature recognition identifier. The file name of the extracted video file is mapped in the database, where the video file corresponds to the facial feature recognition identifier.
With this embodiment, the autonomous shooting angle can rise gradually from a ground-level view containing the visitors to a high-altitude view overlooking the whole scenic area, achieving a presentation effect that conventional shooting means cannot achieve.
Embodiment 16
FIG. 33 is a schematic diagram of an application scenario of the photographing device according to Embodiment 16 of the present application, and FIG. 34 is a schematic diagram of the photographing device according to Embodiment 16 of the present application. As shown in FIG. 33 and FIG. 34, this embodiment may be referred to as the multi-shaft rocker mode and can be used on a spiral staircase, where the video capture device tracks and shoots the target person walking on the staircase. A first stepping motor drives a first rocker arm to rotate, and a second stepping motor drives a second rocker arm to rotate. The rotating shaft of the second rocker arm is mounted on the first rocker arm.
In this embodiment the driving mode is stepping motors, the trajectory is a tracking trajectory, the trigger mode is portrait analysis, and the movement form is multi-joint rotation. The turntable and the rocker arms respond to the video image portrait analysis of the back-end server: after the person enters the target area, the rocker arms and turntable make the video acquisition device follow the position of the person's head.
In response to the back-end server's portrait analysis of the video image, the mobile video acquisition device takes the frame at a fixed duration before the time point at which the portrait appears in the predetermined area as the first frame of the video clip, and the time point at which the portrait leaves the predetermined area as the end frame. The generated video clip is saved in correspondence with the facial feature recognition identifier. The file name of the generated video file is mapped in the database, where the video file corresponds to the facial feature recognition identifier.
In this embodiment, the mobile video acquisition device tracks and shoots the target person through the free rotation of multiple joints, achieving a presentation effect that conventional shooting means cannot achieve.
Embodiment 17
FIG. 35 is a first schematic diagram of an application scenario of the photographing device according to Embodiment 17 of the present application, FIG. 36 is a second schematic diagram of an application scenario of the photographing device according to Embodiment 17 of the present application, and FIG. 37 is a schematic diagram of the photographing device according to Embodiment 17 of the present application. As shown in FIG. 35, FIG. 36 and FIG. 37, this embodiment may be referred to as the slide-and-rocker combination and can be applied to the welcome avenue of a scenic area: a welcoming device that waves a flagpole is installed on the welcome avenue, and a video capture device is also mounted on the welcoming device. In the welcoming device, a first cylinder controls the telescopic movement of a scissor-fork mechanism on a slide, and a second cylinder controls the up-and-down swing of a rocker arm. The photographing device of this embodiment may include an air pump, an oil-water separator, a first cylinder, a first electromagnetic directional valve, a second electromagnetic directional valve, a second cylinder, a third electromagnetic directional valve, and a fourth electromagnetic directional valve.
In this embodiment the driving mode is air cylinders, the trajectory is a random disordered trajectory, the trigger mode is a master control switch, and the movement form is linear reciprocation of the slide combined with single-joint rotation. The slide is controlled by the extension and retraction of the first cylinder, and the rocker arm is controlled by the extension and retraction of the second cylinder. The first electromagnetic directional valve controls the extension of the first cylinder and the second electromagnetic directional valve controls its retraction; the two valves are set to select a random duration between 5 and 20 seconds and to execute their commands alternately for that duration, thereby controlling the operating time of the first cylinder. For example, 8 seconds is randomly selected as the first period: the first electromagnetic directional valve supplies air to the cylinder and the second electromagnetic directional valve exhausts, so the slide moves toward one end; after the 8 seconds end, the second period begins. 17 seconds is randomly selected as the second period: the two valves swap their commands, the first valve exhausts and the second valve supplies air to the first cylinder, so the slide moves in the opposite direction; after the 17 seconds end, the third period begins.
The third electromagnetic directional valve controls the extension of the second cylinder and the fourth electromagnetic directional valve controls its retraction; these two valves are set to select a random duration between 3 and 12 seconds and to execute their commands alternately for that duration, thereby controlling the operating time of the second cylinder. For example, 5 seconds is randomly selected as the first period: the third valve supplies air to the cylinder and the fourth valve exhausts, so the rocker arm moves upward; after the 5 seconds end, the second period begins. 9 seconds is randomly selected as the second period: the third and fourth valves swap their commands, the third valve exhausts and the fourth valve supplies air to the second cylinder, so the rocker arm moves downward; after the 9 seconds end, the third period begins.
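The random valve timing described above, in which each pair of directional valves swaps roles after a randomly chosen period, can be sketched as follows. The valve interface and the way periods are drawn are assumptions; the real system acts on pneumatic hardware rather than Python callables.

```python
# Minimal sketch of the random alternation of a pair of directional valves.
import random
import time

def random_alternation(extend_valve, retract_valve,
                       low_s: float, high_s: float, periods: int) -> None:
    """Alternate which valve supplies air, holding each state for a random
    duration drawn from [low_s, high_s] seconds."""
    supplying, exhausting = extend_valve, retract_valve
    for _ in range(periods):
        hold = random.uniform(low_s, high_s)
        supplying("supply")        # hypothetical command to the valve
        exhausting("exhaust")
        time.sleep(hold)
        supplying, exhausting = exhausting, supplying

# Example use: the slide cylinder valves would run with 5-20 s periods and the
# rocker cylinder valves with 3-12 s periods, e.g.
# random_alternation(valve_1, valve_2, 5, 20, periods=4)
```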
When the master control switch is closed, the video capture device moves randomly with the slide and the rocker arm; when the master control switch is opened, the video capture device stops moving.
A video clip is extracted from the continuously captured video stream: the frame at a fixed duration before the time point at which a facial feature is recognized is taken as the first frame of the video clip, and the frame at a fixed duration after the time point of the first frame is taken as the end frame. Video clips are extracted separately for different facial feature recognition subjects, and each extracted clip is saved in correspondence with its facial feature recognition identifier. The file name of the extracted video file is mapped in the database, where the video file corresponds to the facial feature recognition identifier.
With this embodiment, the autonomous shooting angle can rise gradually from a ground-level view containing the visitors to a high-altitude view overlooking the whole scenic area, achieving a presentation effect that conventional shooting means cannot achieve.
Embodiment 18
According to another aspect of the embodiments of the present application, there is also provided a device for executing the video processing method of the above embodiment. FIG. 38 is a schematic diagram of the video processing device according to an embodiment of the present application. As shown in FIG. 38, the video processing device includes: a first receiving unit 3801, a first acquisition unit 3803, and an extraction unit 3805. The video processing device is described in detail below.
The first receiving unit 3801 is configured to receive a trigger signal, where the trigger signal is used to trigger the camera device to shoot; the camera device includes a driving device and a camera, the driving device is used to drive the camera in the camera device to move, and the camera is used to shoot video while moving.
The first acquisition unit 3803 is configured to acquire the video shot by the camera under the trigger of the trigger signal.
The extraction unit 3805 is configured to extract the identification information corresponding to the person in the video and to save the correspondence between the identification information and the video.
It should be noted here that the first receiving unit 3801, the first acquisition unit 3803, and the extraction unit 3805 correspond to steps S102 to S106 in Embodiment 1; the examples and application scenarios implemented by these units and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1. It should also be noted that, as part of the device, the above units may be executed in a computer system such as a set of computer-executable instructions.
It can be seen from the above that, in the above embodiments of the present application, the first receiving unit may be used to receive the trigger signal, where the trigger signal is used to trigger the camera device to shoot, the camera device includes a driving device and a camera, the driving device drives the camera in the camera device to move, and the camera shoots video while moving; the first acquisition unit is then used to acquire the video shot by the camera under the trigger of the trigger signal; and the extraction unit is used to extract the identification information corresponding to the person in the video and to save the correspondence between the identification information and the video. With the video processing device of the present application, a video containing a user is saved in advance in correspondence with the identification information of the person recognized from the video, and the user's video can then be retrieved from the pre-stored video library according to the user's identification information. This achieves the technical effect of improving the reliability of the video processing method, improves the user experience, and thereby solves the technical problem of low reliability of video processing methods in the related art.
In an optional embodiment, the video processing device further includes: a sensing unit configured to sense, through a sensor provided on the camera device, that a person appears within the shooting range of the camera device before the trigger signal is received; and a first response unit configured to issue the trigger signal in response to the sensing of the sensor.
In an optional embodiment, the sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
In an optional embodiment, the video processing device further includes: a second receiving unit configured to receive a user's operation of a switch in the camera device before the trigger signal is received; and a second response unit configured to issue the trigger signal in response to the operation.
In an optional embodiment, the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
In an optional embodiment, the video processing device further includes: a third receiving unit configured to receive a user's operation on a software interface before the trigger signal is received; and a third response unit configured to send the trigger signal to the camera device via a network in response to the operation.
In an optional embodiment, the device further includes: a third acquisition unit configured to acquire the identification information of the camera device by scanning a graphical code provided on the camera device before the user's operation on the software interface is received; and a display unit configured to display, on the software interface according to the identification information, the operations that can be performed on the camera device.
In an optional embodiment, the video processing device further includes: a fourth acquisition unit configured to acquire, before the user's operation on the software interface is received and in the case where the software interface is displayed on the person's handheld device, the geographic location information of the handheld device; and a display unit configured to display, on the software interface according to the geographic location information, the camera devices within a predetermined range of the handheld device that the person can control, as well as the operations that can be performed on the camera devices.
Optionally, the video processing device further includes: a second acquisition unit configured to, after the identification information corresponding to the person in the video has been extracted from the video and the correspondence between the identification information and the video has been saved, acquire the identification information of a video to be extracted and search the saved videos for one or more videos corresponding to the identification information of the video to be extracted; and a display unit configured to display the one or more videos to the user corresponding to the identification information of the video to be extracted.
In an optional embodiment, the extraction unit is configured to recognize, from the person, attachments on the person and/or biological features of the person, and to use the feature information of the attachments and/or the feature information of the biological features as the identification information for identifying the person; the second acquisition unit is configured to acquire the feature information of the person's attachments and/or the identification information of the person's biological features, and to determine the feature information corresponding to the attachments and/or the biological features as the person's identification information; where the attachments include at least one of the following: clothing, accessories, and hand-held objects; the biological features include at least one of the following: facial features and posture features; and the attachments are used to uniquely identify the person in a predetermined area.
Optionally, the extraction unit is further configured to extract the sensed radio frequency identification information from a radio frequency signal and use the radio frequency identification information as the identification information for identifying the person, or to extract the identification information identifying the person from a network trigger signal; the second acquisition unit is further configured to acquire the radio frequency identification information sensed and extracted from the radio frequency signal and determine the radio frequency identification information as the person's identification information, or to extract the identification information identifying the person from the network trigger signal.
In an optional embodiment, in the case where there are multiple people in the video, the extraction unit includes: a first recognition module configured to recognize the attachments and/or biological features of each of the multiple people; and a first saving module configured to determine the feature information of the attachments and/or the feature information corresponding to the biological features of each person as the identification information of each of the multiple people, and to save the correspondence between the identification information of each of the multiple people and the video.
In an optional embodiment, the first saving module includes: a first determination submodule configured to determine the time node at which the identification information of each of the multiple people is recognized in the video; a second determination submodule configured to use the time node as the time tag of the identification information of each of the multiple people; and a saving submodule configured to save the correspondence between the identification information of each of the multiple people, together with the time tag added to the identification information of each of the multiple people, and the video.
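The saving submodule described above keeps, for each recognized person, the identification information together with the time node at which it was recognized, used as a time tag, against the video. The following is a minimal sketch under an assumed in-memory layout; the field names and structure are illustrative only.

```python
# Minimal sketch of saving per-person identification information with time tags.
from typing import Dict, List, Tuple

def save_time_tagged_ids(recognitions: List[Tuple[str, float]], video_file: str,
                         store: Dict[str, list]) -> None:
    """recognitions holds (person_id, time_node_in_seconds) pairs from the analyzer."""
    for person_id, time_node in recognitions:
        store.setdefault(video_file, []).append(
            {"person_id": person_id, "time_tag": time_node})

store: Dict[str, list] = {}
save_time_tagged_ids([("face-0001", 3.2), ("face-0002", 7.9)], "plaza_0042.mp4", store)
print(store)
```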
In an optional embodiment, the movement trajectory of the camera includes at least one of the following: a reciprocating trajectory between a predetermined start point and a predetermined end point, a cyclic trajectory along a predetermined path, a trajectory designed on the basis of a predetermined program, and a tracking trajectory that follows a target object.
In an optional embodiment, the movement mode of the camera is at least one of the following: rail movement and rotational movement.
In an optional embodiment, the driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
Embodiment 19
According to another aspect of the embodiments of the present application, a storage medium is also provided. The storage medium includes a stored program, where the program executes any one of the video processing methods described above.
Embodiment 20
According to another aspect of the embodiments of the present application, a processor is also provided. The processor is configured to run a program, where the program, when running, executes any one of the video processing methods described above.
The serial numbers of the foregoing embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present application, the description of each embodiment has its own emphasis. For a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are only illustrative. For example, the division of the units may be a division of logical functions, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The above are only preferred implementations of the present application. It should be pointed out that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (34)

  1. A video processing method, comprising:
    receiving a trigger signal, wherein the trigger signal is used to trigger a camera device to shoot, the camera device comprises a driving device and a camera, the driving device is used to drive the camera in the camera device to move, and the camera is used to shoot video while moving;
    acquiring a video shot by the camera under the trigger of the trigger signal;
    extracting, from the video, identification information corresponding to a person in the video, and saving a correspondence between the identification information and the video.
  2. The method according to claim 1, wherein before the trigger signal is received, the method further comprises:
    sensing, through a sensor provided on the camera device, that a person appears within a shooting range of the camera device;
    issuing the trigger signal in response to the sensing of the sensor.
  3. The method according to claim 2, wherein the sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
  4. The method according to claim 1, wherein before the trigger signal is received, the method further comprises:
    receiving a user's operation of a switch in the camera device;
    issuing the trigger signal in response to the operation.
  5. The method according to claim 4, wherein the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
  6. The method according to claim 1, wherein before the trigger signal is received, the method further comprises:
    receiving a user's operation on a software interface;
    sending the trigger signal to the camera device via a network in response to the operation.
  7. The method according to claim 4, wherein before the user's operation on the software interface is received, the method further comprises:
    acquiring identification information of the camera device by scanning a graphical code provided on the camera device;
    displaying, on the software interface according to the identification information, operations that can be performed on the camera device.
  8. The method according to claim 4, wherein before the user's operation on the software interface is received, the method further comprises:
    in a case where the software interface is displayed on the person's handheld device, acquiring geographic location information of the handheld device;
    displaying, on the software interface according to the geographic location information, the camera device within a predetermined range of the handheld device that the person can control, and operations that can be performed on the camera device.
  9. The method according to claim 1, wherein after extracting, from the video, the identification information corresponding to the person in the video and saving the correspondence between the identification information and the video, the method further comprises:
    acquiring identification information of a video to be extracted, and searching the saved videos for one or more videos corresponding to the identification information of the video to be extracted;
    displaying the one or more videos to a user corresponding to the identification information of the video to be extracted.
  10. The method according to claim 9, wherein extracting, from the video, the identification information corresponding to the person in the video comprises: recognizing, from the person, an attachment on the person and/or a biological feature of the person; and using feature information of the attachment and/or feature information of the biological feature as the identification information for identifying the person;
    acquiring the identification information of the video to be extracted comprises: acquiring the feature information of the person's attachment and/or the identification information of the person's biological feature, and determining the feature information corresponding to the attachment and/or the biological feature as the identification information of the person; wherein the attachment comprises at least one of the following: clothing, accessories, and hand-held objects; the biological feature comprises at least one of the following: facial features and posture features; and the attachment is used to uniquely identify the person in a predetermined area.
  11. The method according to claim 10, wherein extracting, from the video, the identification information corresponding to the person in the video comprises: extracting sensed radio frequency identification information from a radio frequency signal, and using the radio frequency identification information as the identification information for identifying the person; or extracting the identification information identifying the person from a network trigger signal;
    acquiring the identification information of the video to be extracted comprises: acquiring the radio frequency identification information sensed and extracted from the radio frequency signal, and determining the radio frequency identification information as the identification information of the person; or extracting the identification information identifying the person from the network trigger signal.
  12. The method according to claim 11, wherein in a case where there are multiple people in the video, extracting, from the video, the identification information corresponding to the people in the video comprises:
    recognizing an attachment and/or a biological feature of each of the multiple people;
    determining the feature information of the attachment and/or the feature information corresponding to the biological feature of each person as the identification information of each of the multiple people, and saving a correspondence between the identification information of each of the multiple people and the video.
  13. The method according to claim 12, wherein saving the correspondence between the identification information of each of the multiple people and the video comprises:
    determining a time node at which the identification information of each of the multiple people is recognized in the video;
    using the time node as a time tag of the identification information of each of the multiple people;
    saving the correspondence between the identification information of each of the multiple people, together with the time tag added to the identification information of each of the multiple people, and the video.
  14. The method according to claim 1, wherein a movement trajectory of the camera comprises at least one of the following: a reciprocating trajectory between a predetermined start point and a predetermined end point, a cyclic trajectory along a predetermined path, a trajectory designed on the basis of a predetermined program, and a tracking trajectory that follows a target object.
  15. The method according to claim 14, wherein a movement mode of the camera is at least one of the following: rail movement and rotational movement.
  16. The method according to claim 1, wherein a driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
  17. A video processing device, comprising:
    a first receiving unit configured to receive a trigger signal, wherein the trigger signal is used to trigger a camera device to shoot, the camera device comprises a driving device and a camera, the driving device is used to drive the camera in the camera device to move, and the camera is used to shoot video while moving;
    a first acquisition unit configured to acquire a video shot by the camera under the trigger of the trigger signal;
    an extraction unit configured to extract, from the video, identification information corresponding to a person in the video, and to save a correspondence between the identification information and the video.
  18. 根据权利要求17所述的装置,其中,所述装置还包括:The device according to claim 17, wherein the device further comprises:
    感应单元,设置为在接收到所述触发信号之前,通过所述摄像装置上设置的传感器感应到有人出现在所述摄像装置的拍摄范围内;The sensing unit is configured to sense that someone appears in the shooting range of the camera device through a sensor provided on the camera device before receiving the trigger signal;
    第一响应单元,设置为响应于所述传感器的感应,发出所述触发信号。The first response unit is configured to send out the trigger signal in response to the sensing of the sensor.
  19. 根据权利要求18所述的装置,其中,所述传感器为以下至少之一:红外感应单元,射频感应单元,雷达探测单元。The device according to claim 18, wherein the sensor is at least one of the following: an infrared sensor unit, a radio frequency sensor unit, and a radar detection unit.
  20. The device according to claim 17, wherein the device further comprises:
    a second receiving unit, configured to receive a user's operation of a switch of the camera device before the trigger signal is received;
    a second response unit, configured to send the trigger signal in response to the operation.
  21. The device according to claim 20, wherein the switch is at least one of the following: a push-button switch unit, a touch switch unit, and a photoelectric switch unit.
  22. The device according to claim 17, wherein the device further comprises:
    a third receiving unit, configured to receive a user's operation on a software interface before the trigger signal is received;
    a third response unit, configured to send the trigger signal to the camera device via a network in response to the operation.
  23. The device according to claim 20, wherein the device further comprises:
    a third acquiring unit, configured to acquire identification information of the camera device by scanning a graphical code provided on the camera device before the user's operation on the software interface is received;
    a display unit, configured to display, on the software interface according to the identification information, the operations that can be performed on the camera device.
  24. The device according to claim 20, wherein the device further comprises:
    a fourth acquiring unit, configured to acquire geographic location information of the person's handheld device, in the case that the software interface is displayed on the handheld device, before the user's operation on the software interface is received;
    a display unit, configured to display, on the software interface according to the geographic location information, the camera devices within a predetermined range of the handheld device that the person is able to control, as well as the operations that can be performed on those camera devices.
  25. The device according to claim 17, wherein the device further comprises:
    a second acquiring unit, configured to, after the identification information corresponding to the person in the video has been extracted from the video and the correspondence between the identification information and the video has been saved, acquire the identification information of a video to be extracted, and search the saved videos for one or more videos corresponding to the identification information of the video to be extracted;
    a display unit, configured to display the one or more videos to the user corresponding to the identification information of the video to be extracted.
  26. The device according to any one of claims 17 to 25, wherein the extracting unit is configured to recognize, from the person, attachments on the person and/or biometric features of the person, and to use the feature information of the attachments and/or the feature information of the biometric features as the identification information identifying the person;
    the second acquiring unit is configured to acquire the feature information of the person's attachments and/or the identification information of the person's biometric features, and to determine the feature information of the attachments and/or the feature information corresponding to the biometric features as the identification information of the person; wherein the attachments comprise at least one of the following: clothing, accessories, and hand-held objects; the biometric features comprise at least one of the following: facial features and posture features; and the attachments are used to uniquely identify the person within a predetermined area.
  27. The device according to claim 26, wherein the extracting unit is further configured to extract sensed radio frequency identification information from a radio frequency signal, to use the radio frequency identification information as the identification information identifying the person, and to extract the identification information identifying the person from a network trigger signal;
    the second acquiring unit is further configured to acquire the radio frequency identification information sensed and extracted from the radio frequency signal, to determine the radio frequency identification information as the identification information of the person, and to extract the identification information identifying the person from the network trigger signal.
  28. The device according to claim 27, wherein, in the case that the number of persons in the video is more than one, the extracting unit comprises:
    a first recognition module, configured to recognize, from the plurality of persons, the attachments and/or biometric features of each person;
    a first saving module, configured to determine the feature information of the attachments on each person and/or the feature information corresponding to the biometric features as the identification information of each of the plurality of persons, and to save the correspondence between the identification information of each of the plurality of persons and the video.
  29. The device according to claim 28, wherein the first saving module comprises:
    a first determining sub-module, configured to determine the time node at which the identification information of each of the plurality of persons is recognized in the video;
    a second determining sub-module, configured to use the time node as a time label for the identification information of each of the plurality of persons;
    a saving sub-module, configured to save the correspondence between the video and both the identification information of each of the plurality of persons and the time label added to that identification information.
  30. The device according to claim 17, wherein the movement trajectory of the camera comprises at least one of the following: a reciprocating trajectory between a predetermined start point and a predetermined end point, a cyclic trajectory along a predetermined path, a trajectory designed by a predetermined program, and a tracking trajectory following a target object.
  31. The device according to claim 30, wherein the camera moves in at least one of the following ways: track-mounted movement and rotational movement.
  32. The device according to claim 17, wherein the driving device is driven in at least one of the following ways: mechanical driving, electromagnetic driving, and pressure driving.
  33. A storage medium, wherein the storage medium comprises a stored program, and the program, when executed, performs the video processing method according to any one of claims 1 to 16.
  34. A processor, wherein the processor is configured to run a program, and the program, when run, performs the video processing method according to any one of claims 1 to 16.
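Editorial illustration only, not part of the application or its claims: the following is a minimal Python sketch of how the correspondence between identification information, time labels, and videos described in claims 12, 13, and 29 could be stored, and how saved videos could be looked up by identification information as described in claim 25. All names used here (CorrespondenceStore, VideoRecord, person_id, video_id) are hypothetical and do not appear in the application.

from collections import defaultdict
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class VideoRecord:
    video_id: str
    # person_id -> time nodes (seconds from the start of the video) at which that
    # person's identification information was recognized (the "time labels")
    time_labels: Dict[str, List[float]] = field(default_factory=dict)


class CorrespondenceStore:
    """Saves the correspondence between identification information and videos."""

    def __init__(self) -> None:
        self._videos: Dict[str, VideoRecord] = {}
        self._videos_by_person: Dict[str, set] = defaultdict(set)

    def save(self, video_id: str, recognitions: List[Tuple[str, float]]) -> None:
        # recognitions: (person_id, time_node) pairs produced by whatever
        # recognition step extracted the identification information
        record = self._videos.setdefault(video_id, VideoRecord(video_id))
        for person_id, time_node in recognitions:
            record.time_labels.setdefault(person_id, []).append(time_node)
            self._videos_by_person[person_id].add(video_id)

    def find_videos(self, person_id: str) -> List[VideoRecord]:
        # Look up the saved videos whose stored identification information
        # matches the identification information of the video to be extracted.
        return [self._videos[v] for v in sorted(self._videos_by_person.get(person_id, ()))]


if __name__ == "__main__":
    store = CorrespondenceStore()
    # Two people recognized in one video, one of them again in a second video.
    store.save("video_001", [("person_A", 3.2), ("person_B", 7.9), ("person_A", 15.0)])
    store.save("video_002", [("person_A", 1.1)])
    for rec in store.find_videos("person_A"):
        print(rec.video_id, rec.time_labels["person_A"])

In this sketch the time node is represented simply as seconds from the start of the video; an implementation could equally use frame indices or wall-clock timestamps.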
PCT/CN2020/134645 2019-12-27 2020-12-08 Video processing method and device WO2021129382A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911380674.8A CN113055586A (en) 2019-12-27 2019-12-27 Video processing method and device
CN201911380674.8 2019-12-27

Publications (1)

Publication Number Publication Date
WO2021129382A1 true WO2021129382A1 (en) 2021-07-01

Family

ID=76506813

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134645 WO2021129382A1 (en) 2019-12-27 2020-12-08 Video processing method and device

Country Status (2)

Country Link
CN (1) CN113055586A (en)
WO (1) WO2021129382A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581062B (en) * 2014-12-26 2019-03-15 中通服公众信息产业股份有限公司 A kind of video monitoring method and system of identity information and video interlink
DE102017001879A1 (en) * 2017-02-27 2018-08-30 Giesecke+Devrient Mobile Security Gmbh Method for verifying the identity of a user
CN108388672B (en) * 2018-03-22 2020-11-10 西安艾润物联网技术服务有限责任公司 Video searching method and device and computer readable storage medium
CN110532432A (en) * 2019-08-21 2019-12-03 深圳供电局有限公司 A kind of personage's trajectory retrieval method and its system, computer readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016209517A1 (en) * 2015-06-25 2016-12-29 Intel Corporation Techniques to save or delete a video clip
CN105159959A (en) * 2015-08-20 2015-12-16 广东欧珀移动通信有限公司 Image file processing method and system
CN105279273A (en) * 2015-10-28 2016-01-27 广东欧珀移动通信有限公司 Photo classification method and device
CN106412429A (en) * 2016-09-30 2017-02-15 深圳前海弘稼科技有限公司 Image processing method and device based on greenhouse
CN106559654A (en) * 2016-11-18 2017-04-05 广州炫智电子科技有限公司 A kind of recognition of face monitoring collection system and its control method
CN109905595A (en) * 2018-06-20 2019-06-18 成都市喜爱科技有限公司 A kind of method, apparatus, equipment and medium shot and play
CN111368724A (en) * 2020-03-03 2020-07-03 成都市喜爱科技有限公司 Amusement image generation method and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113617698A (en) * 2021-08-20 2021-11-09 杭州海康机器人技术有限公司 Package tracing method, device and system, electronic equipment and storage medium
CN113617698B (en) * 2021-08-20 2022-12-06 杭州海康机器人股份有限公司 Package tracing method, device and system, electronic equipment and storage medium
CN114247125A (en) * 2021-12-29 2022-03-29 尚道科技(深圳)有限公司 Method and system for recording score data in sports area based on identification module

Also Published As

Publication number Publication date
CN113055586A (en) 2021-06-29

Similar Documents

Publication Publication Date Title
WO2021129382A1 (en) Video processing method and device
CN106843460B (en) Multiple target position capture positioning system and method based on multi-cam
CN202008670U (en) Angle-adjustable face recognition equipment
CN107660039B (en) A kind of lamp control system of identification dynamic gesture
CN110533553B (en) Service providing method and device
US20160086023A1 (en) Apparatus and method for controlling presentation of information toward human object
CN107027014A (en) A kind of intelligent optical projection system of trend and its method
CN108053523A (en) A kind of efficient wisdom managing caller service system and its method of work
CN101520838A (en) Automatic-tracking and automatic-zooming method for acquiring iris images
KR20150021526A (en) Self learning face recognition using depth based tracking for database generation and update
CN105335750A (en) Client identity identification system and identification method
CN206575538U (en) A kind of intelligent projection display system of trend
CN201904848U (en) Tracking and shooting device based on multiple cameras
CN106514671A (en) Intelligent doorman robot
CN103802111A (en) Chess playing robot
CN107471215A (en) A kind of queuing seat robot control system based on information identification
CN108875716A (en) A kind of human body motion track trace detection camera system
CN205408002U (en) All -round automatic device of shooing in scenic spot
CN105894573A (en) 3D imaging fitting room and imaging method thereof
CN205845105U (en) A kind of for the virtual virtual reality space running fix device seeing room
CN201919082U (en) Lock tracking video camera
CN208117873U (en) A kind of gate sentry robot
CN206907094U (en) Intelligent multidimensional personnel information acquisition system
KR101358064B1 (en) Method for remote controlling using user image and system of the same
Liu et al. Benchmarking human activity recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20906912

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20906912

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/12/2022)
