WO2020001016A1 - Moving image generation method and apparatus, and electronic device and computer-readable storage medium - Google Patents

Moving image generation method and apparatus, and electronic device and computer-readable storage medium Download PDF

Info

Publication number
WO2020001016A1
WO2020001016A1 PCT/CN2019/073077 CN2019073077W WO2020001016A1 WO 2020001016 A1 WO2020001016 A1 WO 2020001016A1 CN 2019073077 W CN2019073077 W CN 2019073077W WO 2020001016 A1 WO2020001016 A1 WO 2020001016A1
Authority
WO
WIPO (PCT)
Prior art keywords
distance coefficient
target object
moving image
coefficient
distance
Prior art date
Application number
PCT/CN2019/073077
Other languages
French (fr)
Chinese (zh)
Inventor
李旭刚
冯宇飞
柳杨光
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司 filed Critical 北京微播视界科技有限公司
Publication of WO2020001016A1 publication Critical patent/WO2020001016A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to a method, a device, an electronic device, and a computer-readable storage medium for generating a moving image.
  • the application range of smart terminals has been widely improved, such as listening to music, playing games, chatting on the Internet, and taking photos through the smart terminal.
  • the camera pixels have reached more than 10 million pixels, which has higher resolution and is comparable to that of professional cameras.
  • Current mobile terminals can identify and collect user actions and form moving images.
  • the inventor has found through research that at present, when recording the motion trajectory of an object, the method of calculating the trajectory of a certain characteristic of the object is generally used.
  • the contour range can be expressed by a polygon or a circle.
  • the distance of an object can be expressed by the area of a polygon. The larger the area, the closer it is, and the smaller the area, the farther it is.
  • the area will change, it will cause the motion trajectory or speed of the object to change, making the image look uneven.
  • an embodiment of the present disclosure provides a method for generating a moving image, which is used to smooth a moving trajectory or a moving speed of an object in the image so that the image looks smoother.
  • an embodiment of the present disclosure provides a method for generating a moving image, including: using an image sensor to detect a target object; identifying an outer frame of the target object; using one or more sides of the outer frame to calculate the target object and A distance coefficient between image sensors; using the distance coefficient to generate a moving image of the target object.
  • the step of calculating the distance coefficient includes: calculating a current distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame, and calculating a distance coefficient using the current distance coefficient and the historical distance coefficient.
  • the historical distance coefficient is one of the following: the distance coefficient between the target object and the image sensor in the previous image frame; the distance coefficient between the target object and the image sensor at the last moment; and the current The average of multiple distance coefficients between the target object and the image sensor in multiple times or multiple frames of images before the time.
  • the calculating the distance coefficient using the current distance coefficient and the historical distance coefficient includes: multiplying the current distance coefficient by a first weight coefficient to obtain a first weight distance coefficient; multiplying the historical distance coefficient by a second weight coefficient to obtain a second Weighted distance coefficient; add the first weighted distance coefficient and the second weighted distance coefficient to obtain the distance coefficient.
  • the generating a moving image of the target object using the distance coefficient includes generating a moving image of the target object using multiple distance coefficients.
  • the target object is a palm
  • the outer frame is a smallest rectangle covering the palm.
  • the length of the long side of the rectangle is L
  • the length of the wide side is W
  • the distance coefficient is:
  • calculate the distance coefficient using the current distance coefficient and the historical distance coefficient including:
  • f ′ (x) represents the distance coefficient
  • f (x n ) represents the current distance coefficient
  • f (x n-1 ) represents the previous moment Distance factor
  • detecting the target object using the image sensor includes: obtaining color information of the image and position information of the color information using the image sensor; comparing the color information with preset palm color information; identifying the first color information, The error between the first color information and the preset palm color information is less than a first threshold; and the position information of the first color information is used to form a contour of the palm.
  • an embodiment of the present disclosure provides a moving image generating device including: a detection module for detecting a target object using an image sensor; a recognition module for identifying an outer frame of the target object; a distance coefficient calculation module for A distance coefficient between the target object and the image sensor is calculated by using one or more side lengths of the outer frame; an image generating module is configured to use the distance coefficient to generate a moving image of the target object.
  • the distance coefficient calculation module includes: a first distance coefficient calculation module for calculating a current distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame; a second distance coefficient The calculation module is configured to calculate the distance coefficient by using the current distance coefficient and the historical distance coefficient.
  • the historical distance coefficient is one of the following: the distance coefficient between the target object and the image sensor in the previous image frame; the distance coefficient between the target object and the image sensor at the last moment; and the current The average of multiple distance coefficients between the target object and the image sensor in multiple times or multiple frames of images before the time.
  • the second distance coefficient calculation module includes: a first weighted distance coefficient calculation module, configured to multiply the current distance coefficient by the first weighted coefficient to obtain a first weighted distance coefficient; a second weighted distance coefficient calculation module, The second distance coefficient is obtained by multiplying the historical distance coefficient by the second weight coefficient.
  • the third distance coefficient calculation module is configured to add the first weight distance coefficient and the second weight distance coefficient to obtain the distance coefficient.
  • the image generating module is configured to generate a moving image of the target object using multiple distance coefficients.
  • the target object is a palm
  • the outer frame is a smallest rectangle covering the palm.
  • the length of the long side of the rectangle is L
  • the length of the wide side is W
  • the distance coefficient is:
  • calculate the distance coefficient using the current distance coefficient and the historical distance coefficient including:
  • f ′ (x) represents the distance coefficient
  • f (x n ) represents the current distance coefficient
  • f (x n-1 ) represents the previous moment Distance factor
  • the detection module includes: an information acquisition module for acquiring color information of the image and position information of the color information using an image sensor; a comparison module for comparing the color information with preset palm color information; A recognition module, configured to recognize first color information, and an error between the first color information and the preset palm color information is less than a first threshold; a contour forming module, configured to form position information using the first color information Palm silhouette.
  • an electronic device including:
  • At least one processor At least one processor
  • a memory connected in communication with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform any one of the foregoing movements of the first aspect Image generation method.
  • an embodiment of the present disclosure provides a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions are used to cause a computer to execute any one of the foregoing first aspects.
  • the moving image generating method is not limited to:
  • Embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a computer-readable storage medium for generating a moving image.
  • the moving image generating method includes: detecting a target object using an image sensor; identifying an outer frame of the target object; using one or more side lengths of the outer frame to calculate a distance coefficient between the target object and the image sensor; The distance coefficient generates a moving image of the target object.
  • FIG. 1 is a flowchart of Embodiment 1 of a moving image generating method according to an embodiment of the present disclosure
  • FIG. 2 is a flowchart of a second embodiment of a moving image generating method according to an embodiment of the present disclosure
  • FIG. 3 is a flowchart of a third embodiment of a moving image generating method according to an embodiment of the present disclosure
  • Embodiments 1, 2, and 3 of a moving image generating device according to an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a moving image generating terminal according to an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of Embodiment 1 of a moving image generating method according to an embodiment of the present disclosure.
  • the moving image generating method provided in this embodiment may be executed by a moving image generating device, and the moving image generating device may be implemented as software. Or implemented as a combination of software and hardware, the moving image generating device may be integrated in a certain device in the image processing system, such as an image processing server or an image processing terminal device.
  • the core idea of this embodiment is: use the side length of the outer frame of the target object to indicate the distance between the target object and the image sensor, and use this distance to generate a moving image of the target object.
  • the method includes the following steps:
  • S101 Use an image sensor to detect a target object.
  • the target object may be any object that can be recognized by the image sensor, such as a tree, an animal, a person, or a part of the entire object, such as a human face, a human hand, and the like.
  • Detecting a target object includes locating feature points of the target object and identifying the target object.
  • Feature points are points in the image that have distinct characteristics and can effectively reflect the essential characteristics of the image and can identify the target object in the image. If the target object is a human face, then the key points of the face need to be obtained. If the target image is a house, then the key points of the house need to be obtained. Take the human face as an example to illustrate the acquisition of key points.
  • the face contour mainly includes 5 parts: eyebrows, eyes, nose, mouth and cheeks, and sometimes also includes pupils and nostrils. Generally speaking, a complete description of the face contour is achieved. The number of key points required is about 60.
  • Face keypoint extraction on the image is equivalent to finding the corresponding position coordinates of each face contour keypoint in the face image, that is, keypoint positioning. This process needs to be performed based on the characteristics of the keypoint corresponding. After clearly identifying the image features of the key points, a search and comparison is performed in the image based on this feature to accurately locate the positions of the key points on the image.
  • the feature points occupy only a very small area in the image (usually only a few to tens of pixels in size), the area occupied by the features corresponding to the feature points on the image is also usually very limited and local.
  • the features currently used There are two ways of extraction: (1) one-dimensional range image feature extraction along the vertical contour; (2) two-dimensional range image feature extraction of square neighborhood of feature points.
  • ASM and AAM methods statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on.
  • the number of key points, accuracy, and speed used by the above various implementation methods are different, which are suitable for different application scenarios.
  • the same principle can be used to identify target objects.
  • the position where the target object exists is found from the image collected by the image sensor and the target object is segmented from the background.
  • the position where the target object exists can be located using color, and the target object is determined by color. Rough matching; feature extraction and recognition of target object images found and segmented.
  • a polygon is delineated outside the outer contour of the target object.
  • the polygon is actually any shape such as a circle, but it is preferable to easily calculate the area or the side length or perimeter.
  • a rectangle is taken as an example.
  • the width at the widest point and the length at the longest point of the target object can be calculated, and the rectangle of the outer border of the target object can be identified with the width and length.
  • An implementation of calculating the longest and widest points of the target object is to extract the boundary feature points of the target object, calculate the difference between the X coordinates of the two boundary feature points with the furthest X coordinate distances, and calculate Y as the width of the rectangle.
  • the outer frame may be set to the smallest circle covering the fist, so that the side length of the outer frame may be the radius or perimeter of the circle.
  • the distance coefficient between the target object and the image sensor is calculated using one or more sides of the outer frame, and the distance coefficient indicates the distance between the target object and the image sensor.
  • the outer frame is rectangular
  • the wide or long sides of the rectangle may be used to calculate the distance coefficient between the target object and the image sensor, or the sum of the wide and long sides of the rectangle may be used to calculate Because the distance coefficient is linear, the distance coefficient is calculated linearly, and no jump occurs.
  • the side length or the sum of the side lengths of the outer frame can directly represent the distance between the target object and the image sensor, and can also participate in the calculation of the distance as a distance coefficient, that is, the distance and the side length of the outer frame are constant.
  • the function relationship is specifically what kind of function relationship is satisfied. The user can customize or use the function preset in the system. Each function relationship can present different moving image effects.
  • the distance coefficient between the target object and the image sensor is calculated based on identifying the outer frame of the target object based on the side length of the outer frame. Because the edge length of the outer frame changes linearly with distance, it can better reflect the motion trajectory of the object, avoiding the jump caused by using the outer frame area to indicate the distance in the prior art, and making the image jump.
  • FIG. 2 is a flowchart of a second embodiment of a moving image generating method according to an embodiment of the present disclosure. As shown in FIG. 2, the method may include the following steps:
  • S201 Use an image sensor to detect a target object
  • a historical distance coefficient is added, and the historical distance coefficient is added to the current distance coefficient to calculate the actual distance coefficient to be used.
  • the system needs to set a cache to cache at least one historical distance coefficient, that is, after the distance coefficient is calculated, it is immediately sent to the corresponding cache for future distance coefficient calculation.
  • the historical distance coefficient is a distance coefficient between a target object and an image sensor in a previous image frame, and a distance calculation frequency is calculated once for each frame of the image; or the historical distance coefficient is a target at a previous moment The distance coefficient between the object and the image sensor.
  • the last time can be the last calculated time or a user-defined time, such as 1 second. This time should be set before the method runs to tell Which distance coefficients the system needs to save; or the historical distance coefficient is the average of multiple distance coefficients before the current time.
  • the calculation of the average value may be an absolute bisection of the values, or an average value with weights.
  • the calculating the distance coefficient by using the current distance coefficient and the historical distance coefficient is specifically: multiplying the current distance coefficient by the first weight coefficient ⁇ to obtain the first weight distance coefficient; and multiplying the historical distance coefficient by the second weight coefficient.
  • the above-mentioned first weight coefficient and second weight coefficient can be preset or customized. Among the customized weight coefficients, coefficient combinations can be preset, and these coefficient combinations can achieve a predetermined motion effect or can be completely customized. The user can Use a slider to adjust the weight coefficient.
  • the first weight coefficient When the slider is at the leftmost end, the first weight coefficient is 1 and the second weight coefficient is 0. When the slider is at the center, the first weight coefficient and the second weight coefficient are both 0.5. When the slider is at the far right end, the first weight coefficient is 0 and the second weight coefficient is 1.
  • the user can freely control the value of the weight coefficient through the slider, and at the same time can view the moving image preview generated by the standard image, In order to easily see the effect of the weight coefficient.
  • the distances between the target object and the image sensor in multiple times or multiple image frames are calculated using multiple distance coefficients to generate a continuous moving image of the target object.
  • calculating the distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame includes: calculating the target object and the image by using one or more side lengths of the outer frame.
  • the current distance coefficient between the sensors uses the current distance coefficient and the historical distance coefficient to calculate the distance coefficient; in this embodiment, the weight of the current distance coefficient and the historical distance coefficient when calculating the distance coefficient can be adjusted, or The historical distance coefficient calculation method is controlled to achieve different moving image effects.
  • FIG. 3 is a flowchart of Embodiment 3 of a moving image generating method according to an embodiment of the present disclosure. As shown in FIG. 3, the method may include the following steps:
  • the limited application scenario is that the user moves the palm forward and backward relative to the camera in front of the camera.
  • color features can be used to locate the palm position, segment the palm from the background, and perform feature extraction and recognition on the palm image found and segmented.
  • the image sensor is used to obtain the color information of the image and the position information of the color information; compare the color information with preset palm color information; identify the first color information, and the first color information and the preset color information The error of the palm color information is less than the first threshold; the position information of the first color information is used to form the contour of the palm.
  • the image data of the RGB color space collected by the image sensor can be mapped to the HSV color space, and the information in the HSV color space is used as contrast information.
  • the HSV color space is used.
  • the hue value in the color information is used as the color information, the hue information is least affected by the brightness, and the interference of the brightness can be well filtered.
  • the method for feature points can use the method described in the first embodiment or any suitable feature point extraction method in the prior art. limit.
  • the distance coefficient can be calculated using the following function:
  • This function is a power function, and the power exponent is less than 1, so its value becomes larger with the value of x, but the rate of change becomes smaller and smaller.
  • the value of a determines the rate of change of x, and the value of a can be customized by the user, and the implementation of the slider in the second embodiment can also be used, which is not repeated here.
  • the above function is only an example. In practical applications, other suitable functions may be used instead of the above functions. In one implementation, multiple functions and parameters corresponding to the functions may be preset in the system. , The user can select the corresponding function and adjust the corresponding parameters to achieve different moving image effects
  • the distance coefficient is calculated using the current distance coefficient and the historical distance coefficient:
  • f ′ (x) represents the distance coefficient
  • f (x n ) represents the current distance coefficient
  • f (x n-1 ) represents the previous moment Distance factor
  • the target object is the palm, which is applied to the scene where the user moves the palm back and forth in front of the camera, which can bring different motion effects to the palm motion of the user.
  • the embodiment uses a palm to describe the technical solution, it can be understood that the target object may be any other object, and the technical solution in this embodiment may be used to generate a moving image.
  • FIG. 4 is a schematic structural diagram of a first embodiment of an image cropping device according to an embodiment of the present disclosure. As shown in FIG. 4, the device includes a detection module 41, a recognition module 42, a distance coefficient calculation module 43, and an image generation module 44.
  • a detection module 41 configured to detect a target object using an image sensor
  • An identification module 42 for identifying an outer frame of the target object
  • a distance coefficient calculation module 43 is configured to calculate a distance coefficient between a target object and an image sensor by using one or more side lengths of the outer frame;
  • An image generating module 44 is configured to generate a moving image of the target object using the distance coefficient.
  • the apparatus shown in FIG. 4 can execute the method of the embodiment shown in FIG. 1.
  • the module performs the following steps:
  • a detection module 41 configured to detect a target object using an image sensor
  • An identification module 42 for identifying an outer frame of the target object
  • the distance coefficient calculation module 43 calculates a current distance coefficient between the target object and the image sensor by using one or more sides of the outer frame, and calculates a distance coefficient by using the current distance coefficient and the historical distance coefficient;
  • An image generating module 44 is configured to generate a moving image of the target object using the distance coefficient.
  • the distance coefficient calculation module 43 includes:
  • a first distance coefficient calculation module configured to calculate a current distance coefficient between a target object and an image sensor by using one or more side lengths of the outer frame
  • the second distance coefficient calculation module is configured to calculate the distance coefficient by using the current distance coefficient and the historical distance coefficient.
  • the second distance coefficient calculation module includes:
  • a first weighted distance coefficient calculation module configured to multiply the current distance coefficient by the first weighted coefficient to obtain a first weighted distance coefficient
  • a second weighted distance coefficient calculation module configured to multiply the historical distance coefficient by the second weighted coefficient to obtain a second weighted distance coefficient
  • a third distance coefficient calculation module is configured to add the first weighted distance coefficient and the second weighted distance coefficient to obtain a distance coefficient.
  • the device in Embodiment 2 may execute the method in the embodiment shown in FIG. 2.
  • the parts that are not described in detail in this embodiment reference may be made to the related description of the embodiment shown in FIG. 2.
  • the implementation process and technical effect of the technical solution refer to the description in the embodiment shown in FIG. 2, and details are not described herein again.
  • the module performs the following steps:
  • An identification module 42 for identifying a smallest rectangle covering the palm
  • a distance coefficient calculation module 43 configured to calculate a distance coefficient between the palm and the image sensor by using one or more sides of the minimum rectangle
  • An image generating module 44 is configured to generate a moving image of the target object using the distance coefficient.
  • the detection module 41 includes:
  • An information acquisition module configured to acquire color information of an image and position information of the color information using an image sensor
  • a comparison module configured to compare the color information with preset palm color information
  • a recognition module configured to recognize first color information, and an error between the first color information and the preset palm color information is less than a first threshold
  • a contour forming module is configured to form a contour of a palm using position information of the first color information.
  • the moving image generating device in this embodiment may execute the method in the embodiment shown in FIG. 3.
  • the parts that are not described in detail in this embodiment reference may be made to the related description of the embodiment shown in FIG. 3.
  • the implementation process and technical effect of the technical solution refer to the description in the embodiment shown in FIG. 3, and details are not described herein again.
  • FIG. 5 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 5, the electronic device 50 according to an embodiment of the present disclosure includes a memory 51 and a processor 52.
  • the memory 51 is configured to store non-transitory computer-readable instructions.
  • the memory 51 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • the processor 52 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the electronic device 50 to perform desired functions.
  • the processor 52 is configured to run the computer-readable instructions stored in the memory 51 so that the electronic device 50 executes all or part of the foregoing moving image generating method of the embodiments of the present disclosure. step.
  • this embodiment may also include well-known structures such as a communication bus and an interface. These well-known structures should also be included in the protection scope of the present disclosure. within.
  • FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 60 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 61 stored thereon.
  • the non-transitory computer-readable instructions 61 are executed by a processor, all or part of the steps of the foregoing moving image generating method of the embodiments of the present disclosure are performed.
  • the computer-readable storage medium 60 includes, but is not limited to, optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disk), non-volatile rewritable memory media (for example, memory card), and media with a built-in ROM (for example, ROM cartridge).
  • FIG. 7 is a schematic diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 7, the moving image generating terminal 70 includes the moving image generating device described in the foregoing device embodiments.
  • the terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, a vehicle-mounted terminal device, a vehicle-mounted display terminal, and a vehicle-mounted electronic rear-view mirror, as well as fixed terminal devices such as a digital TV and a desktop computer.
  • the terminal may further include other components.
  • the moving image generating terminal 70 may include a power supply unit 71, a wireless communication unit 72, an A/V (audio/video) input unit 73, a user input unit 74, a sensing unit 75, an interface unit 76, a controller 75, an output unit 78, a memory 79, and so on.
  • FIG. 7 shows a terminal having various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the wireless communication unit 72 allows radio communication between the terminal 70 and a wireless communication system or network.
  • the A / V input unit 73 is used to receive audio or video signals.
  • the user input unit 74 may generate key input data according to a command input by the user to control various operations of the terminal device.
  • the sensing unit 75 detects the current state of the terminal 70, the position of the terminal 70, the presence or absence of a user's touch input to the terminal 70, the orientation of the terminal 70, the acceleration or deceleration movement and direction of the terminal 70, and the like, and generates a command or signal for controlling the operation of the terminal 70.
  • the interface unit 76 functions as an interface through which at least one external device can connect with the terminal 70.
  • the output unit 78 is configured to provide an output signal in a visual, audio, and/or tactile manner.
  • the memory 79 may store software programs and the like for processing and control operations performed by the controller 75, or may temporarily store data that has been output or is to be output.
  • the memory 79 may include at least one type of storage medium.
  • the terminal 70 can cooperate with a network storage device that performs a storage function of the memory 79 through a network connection.
  • the controller 75 generally controls the overall operation of the terminal device.
  • the controller 75 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 75 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 71 receives external power or internal power under the control of the controller 75 and provides appropriate power required to operate each element and component.
  • Various embodiments of the moving image generation method proposed by the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof.
  • various embodiments of the moving image generation method proposed by the present disclosure can be implemented by using an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, these embodiments may be implemented in the controller 75.
  • various embodiments of the moving image generation method proposed by the present disclosure may be implemented with a separate software module that allows execution of at least one function or operation.
  • the software codes may be implemented by a software application (or program) written in any suitable programming language, and the software codes may be stored in the memory 79 and executed by the controller 75.
  • the word "or" used in an enumeration of items beginning with "at least one of" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be decomposed and/or recombined.
  • These decompositions and / or recombinations should be considered as equivalent solutions of the present disclosure.

Abstract

A moving image generation method and apparatus, and an electronic device and a computer-readable storage medium. The moving image generation method comprises: detecting a target object using an image sensor (S101); identifying an outer frame of the target object (S102); using one or more side lengths of the outer frame to calculate a distance coefficient between the target object and the image sensor (S103); and using the distance coefficient to generate a moving image of the target object (S104). The problem of a jumping phenomenon occurring when the image area is used to determine a movement trajectory of a target object in the prior art is solved, thereby making a moving image smoother.

Description

Moving image generation method, apparatus, electronic device, and computer-readable storage medium
Cross reference
This disclosure refers to the Chinese patent application No. 201810699053.5, filed on June 29, 2018 and entitled "Moving image generation method, apparatus, electronic device, and computer-readable storage medium", which is incorporated into this application by reference in its entirety.
Technical field
The present disclosure relates to the field of image processing, and in particular, to a moving image generation method, apparatus, electronic device, and computer-readable storage medium.
Background
With the development of computer technology, the range of applications of smart terminals has expanded greatly; for example, a smart terminal can be used to listen to music, play games, chat online, and take photos. As for the photographing technology of smart terminals, camera resolutions have exceeded ten million pixels, offering high definition and photographing effects comparable to those of professional cameras.
Current mobile terminals can recognize and capture a user's motions and form a moving image.
Summary of the invention
However, the inventor has found through research that, when recording the motion trajectory of an object, a method of calculating the trajectory of a certain feature of the object is generally used. For example, to calculate the speed of an object moving from far to near or from near to far, the contour range of the object is first identified; generally, this contour range can be represented by a polygon or a circle, and the distance of the object can be represented by the area of the polygon: the larger the area, the closer the object, and the smaller the area, the farther the object. However, when the motion of the object is recorded, the area changes in jumps, which causes the motion trajectory or speed of the object to jump, making the image look unsmooth.
Therefore, if a moving image generation method can be provided in which the motion trajectory and motion speed of an object in the image do not jump, the shooting effect of the image and the user experience can be greatly improved.
In view of this, an embodiment of the present disclosure provides a moving image generation method for smoothing the motion trajectory or motion speed of an object in an image, so that the image looks smoother.
In a first aspect, an embodiment of the present disclosure provides a moving image generation method, including: detecting a target object using an image sensor; identifying an outer frame of the target object; calculating a distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame; and generating a moving image of the target object by using the distance coefficient.
Optionally, the step of calculating the distance coefficient includes: calculating a current distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame, and calculating the distance coefficient by using the current distance coefficient and a historical distance coefficient.
Optionally, the historical distance coefficient is one of the following: the distance coefficient between the target object and the image sensor in the previous image frame; the distance coefficient between the target object and the image sensor at the previous moment; or the average of multiple distance coefficients between the target object and the image sensor at multiple moments or in multiple image frames before the current moment.
Optionally, the calculating the distance coefficient by using the current distance coefficient and the historical distance coefficient includes: multiplying the current distance coefficient by a first weight coefficient to obtain a first weighted distance coefficient; multiplying the historical distance coefficient by a second weight coefficient to obtain a second weighted distance coefficient; and adding the first weighted distance coefficient and the second weighted distance coefficient to obtain the distance coefficient.
Optionally, the generating a moving image of the target object by using the distance coefficient includes: generating the moving image of the target object by using multiple distance coefficients.
Optionally, the target object is a palm, and the outer frame is the smallest rectangle covering the palm.
Optionally, the length of the long side of the rectangle is L, the length of the wide side is W, and the distance coefficient is:
f(x) = A * x^a
where x = L + W, A is a real number greater than 0, and 0 < a < 1.
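As an illustrative sketch of this optional formula (the specific values A = 1.0 and a = 0.5 below are assumptions chosen for the example and are not prescribed by the disclosure), the distance coefficient for a rectangular outer frame could be computed as:

```python
def distance_coefficient(long_side, wide_side, A=1.0, a=0.5):
    """Compute f(x) = A * x**a with x = L + W.

    The disclosure requires A > 0 and 0 < a < 1; the default
    values here are illustrative assumptions only.
    """
    if A <= 0 or not (0 < a < 1):
        raise ValueError("requires A > 0 and 0 < a < 1")
    x = long_side + wide_side
    return A * x ** a

# A palm frame of 120 x 80 pixels gives x = 200 and f(x) = 200**0.5.
coef = distance_coefficient(120, 80)
```

Because f grows with x = L + W, a larger imaged palm (closer to the sensor) yields a larger coefficient, and the sub-linear exponent a damps large frame-to-frame changes.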
Optionally, calculating the distance coefficient by using the current distance coefficient and the historical distance coefficient includes:
f'(x) = α * f(x_n) + β * f(x_(n-1))
where α > 0, β > 0, and α + β = 1, with n ≥ 2; f'(x) represents the distance coefficient, f(x_n) represents the current distance coefficient, and f(x_(n-1)) represents the distance coefficient at the previous moment.
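A minimal sketch of this weighted combination (the weights α = 0.7 and β = 0.3 are illustrative assumptions; the formula only requires α > 0, β > 0, and α + β = 1):

```python
def smoothed_coefficient(current, previous, alpha=0.7, beta=0.3):
    """f'(x) = alpha * f(x_n) + beta * f(x_(n-1)).

    alpha weights the current distance coefficient and beta the
    historical one; alpha + beta must equal 1.
    """
    if alpha <= 0 or beta <= 0 or abs(alpha + beta - 1.0) > 1e-9:
        raise ValueError("requires alpha > 0, beta > 0, alpha + beta = 1")
    return alpha * current + beta * previous

# Smoothing a noisy sequence of per-frame distance coefficients:
coeffs = [14.1, 14.9, 13.8, 15.2]
smoothed = [coeffs[0]]
for c in coeffs[1:]:
    smoothed.append(smoothed_coefficient(c, smoothed[-1]))
```

Each output value is pulled toward the previous smoothed value, so a sudden change in the measured coefficient produces a gentler change in the smoothed sequence.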
Optionally, the detecting a target object using an image sensor includes: acquiring color information of an image and position information of the color information by using the image sensor; comparing the color information with preset palm color information; identifying first color information, where the error between the first color information and the preset palm color information is less than a first threshold; and forming a contour of the palm by using the position information of the first color information.
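The color-matching step described above can be sketched as follows (the preset palm color, the error metric, and the threshold are hypothetical choices made for illustration; a real implementation might work in a color space more robust to lighting, such as HSV or YCrCb):

```python
def find_palm_pixels(pixels, preset_color=(205, 160, 130), threshold=60):
    """Return the positions of pixels whose color error relative to
    the preset palm color is below the first threshold; these
    positions can then be used to form the palm contour.

    `pixels` is an iterable of ((x, y), (r, g, b)) tuples.
    """
    matched = []
    for pos, color in pixels:
        # Sum of per-channel absolute differences as the color error.
        error = sum(abs(c - p) for c, p in zip(color, preset_color))
        if error < threshold:
            matched.append(pos)
    return matched

sample = [((0, 0), (10, 10, 10)), ((3, 4), (200, 158, 128))]
palm_positions = find_palm_pixels(sample)  # only (3, 4) matches
```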
In a second aspect, an embodiment of the present disclosure provides a moving image generating apparatus, including: a detection module, configured to detect a target object using an image sensor; a recognition module, configured to identify an outer frame of the target object; a distance coefficient calculation module, configured to calculate a distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame; and an image generation module, configured to generate a moving image of the target object by using the distance coefficient.
Optionally, the distance coefficient calculation module includes: a first distance coefficient calculation module, configured to calculate a current distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame; and a second distance coefficient calculation module, configured to calculate the distance coefficient by using the current distance coefficient and a historical distance coefficient.
Optionally, the historical distance coefficient is one of the following: the distance coefficient between the target object and the image sensor in the previous image frame; the distance coefficient between the target object and the image sensor at the previous moment; or the average of multiple distance coefficients between the target object and the image sensor at multiple moments or in multiple image frames before the current moment.
Optionally, the second distance coefficient calculation module includes: a first weighted distance coefficient calculation module, configured to multiply the current distance coefficient by a first weight coefficient to obtain a first weighted distance coefficient; a second weighted distance coefficient calculation module, configured to multiply the historical distance coefficient by a second weight coefficient to obtain a second weighted distance coefficient; and a third distance coefficient calculation module, configured to add the first weighted distance coefficient and the second weighted distance coefficient to obtain the distance coefficient.
Optionally, the image generation module is configured to generate the moving image of the target object by using multiple distance coefficients.
Optionally, the target object is a palm, and the outer frame is the smallest rectangle covering the palm.
Optionally, the length of the long side of the rectangle is L, the length of the wide side is W, and the distance coefficient is:
f(x) = A * x^a
where x = L + W, A is a real number greater than 0, and 0 < a < 1.
Optionally, calculating the distance coefficient by using the current distance coefficient and the historical distance coefficient includes:
f'(x) = α * f(x_n) + β * f(x_(n-1))
where α > 0, β > 0, and α + β = 1, with n ≥ 2; f'(x) represents the distance coefficient, f(x_n) represents the current distance coefficient, and f(x_(n-1)) represents the distance coefficient at the previous moment.
Optionally, the detection module includes: an information acquisition module, configured to acquire color information of an image and position information of the color information by using the image sensor; a comparison module, configured to compare the color information with preset palm color information; a recognition module, configured to identify first color information, where the error between the first color information and the preset palm color information is less than a first threshold; and a contour forming module, configured to form a contour of the palm by using the position information of the first color information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
at least one processor; and
a memory communicatively connected to the at least one processor; where
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the moving image generation method according to any one of the foregoing first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are used to cause a computer to perform the moving image generation method according to any one of the foregoing first aspect.
Embodiments of the present disclosure provide a moving image generation method, apparatus, electronic device, and computer-readable storage medium. The moving image generation method includes: detecting a target object using an image sensor; identifying an outer frame of the target object; calculating a distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame; and generating a moving image of the target object by using the distance coefficient. By adopting this technical solution, the embodiments of the present disclosure solve the jump phenomenon that occurs in the prior art when the image area is used to determine the motion trajectory of a target object, making the moving image smoother.
The above description is only an overview of the technical solutions of the present disclosure. In order to understand the technical means of the present disclosure more clearly so that it can be implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of the present disclosure more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of Embodiment 1 of a moving image generation method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of Embodiment 2 of a moving image generation method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of Embodiment 3 of a moving image generation method according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of Embodiments 1, 2, and 3 of a moving image generating apparatus according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a moving image generating terminal according to an embodiment of the present disclosure.
Detailed description
The implementations of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the content disclosed in this specification. Obviously, the described embodiments are only some of the embodiments of the present disclosure, rather than all of them. The present disclosure can also be implemented or applied through other different specific implementations, and various details in this specification can also be modified or changed based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the case of no conflict, the following embodiments and the features in the embodiments can be combined with each other. Based on the embodiments in the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present disclosure.
It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that an aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, any number of the aspects set forth herein may be used to implement a device and/or practice a method. In addition, the device may be implemented and/or the method may be practiced using structures and/or functionality other than or in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments only describe the basic idea of the present disclosure in a schematic manner. The drawings show only components related to the present disclosure, rather than the number, shape, and size of the components in actual implementation; in actual implementation, the type, quantity, and proportion of each component can be changed at will, and the component layout may also be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the described aspects may be practiced without these specific details.
FIG. 1 is a flowchart of Embodiment 1 of a moving image generation method according to an embodiment of the present disclosure. The moving image generation method provided in this embodiment may be executed by a moving image generating apparatus. The apparatus may be implemented as software, or as a combination of software and hardware, and may be integrated in a device in an image processing system, such as an image processing server or an image processing terminal device. The core idea of this embodiment is to use the side length of the outer frame of a target object to represent the distance between the target object and the image sensor, and to use this distance to generate a moving image of the target object.
As shown in FIG. 1, the method includes the following steps:
S101. Detect a target object using an image sensor.
In this embodiment, the target object may be any object that can be recognized by the image sensor, such as a tree, an animal, or a person, or may be a part of a whole object, such as a human face or a human hand.
Detecting the target object includes locating feature points of the target object and recognizing the target object. Feature points are points in the image that have distinctive characteristics, effectively reflect the essential features of the image, and can identify the target object in the image. If the target object is a human face, the key points of the face need to be obtained; if the target image is a house, the key points of the house need to be obtained. Taking the human face as an example to illustrate how key points are obtained: the face contour mainly includes five parts, namely the eyebrows, eyes, nose, mouth, and cheeks, and sometimes also includes the pupils and nostrils. Generally, a relatively complete description of the face contour requires about 60 key points. If only the basic structure needs to be described, without detailed description of each part or of the cheeks, the number of key points can be reduced accordingly; if the pupils and nostrils, or more detailed facial features, need to be described, the number of key points can be increased. Extracting face key points from the image is equivalent to finding the corresponding position coordinates of each face contour key point in the face image, that is, key point positioning. This process needs to be performed based on the features corresponding to the key points: after image features that clearly identify the key points are obtained, a search and comparison is performed in the image based on these features to accurately locate the positions of the key points in the image. Because feature points occupy only a very small area in the image (usually only a few to a few dozen pixels), the region occupied by the features corresponding to the feature points is usually also very limited and local. Two feature extraction methods are currently used: (1) one-dimensional range image feature extraction along the direction perpendicular to the contour; and (2) two-dimensional range image feature extraction in a square neighborhood of the feature point. The above two methods have many implementations, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, and batch extraction methods. These implementations differ in the number of key points used, accuracy, and speed, and are suitable for different application scenarios. Likewise, the same principle can be used to recognize other target objects.
In this embodiment, the position where the target object exists is found from the image collected by the image sensor, and the target object is segmented from the background. Here, the position of the target object can be located by color, and the target object can be roughly matched by color; feature extraction and recognition are then performed on the found and segmented target object image.
S102. Identify the outer frame of the target object.
After the target object is detected, a polygon is delineated outside the outer contour of the target object. In this embodiment a polygon is used, but in practice it can also be any shape, such as a circle; a shape whose area, side length, or perimeter is easy to calculate is preferred. Taking a rectangle as an example, after the feature points of the target object are identified, the width at the widest point and the length at the longest point of the target object can be calculated, and the outer-frame rectangle of the target object can be identified from this width and length. One implementation of calculating the longest and widest points of the target object is to extract the boundary feature points of the target object, calculate the difference between the X coordinates of the two boundary feature points that are farthest apart in the X direction as the length of the wide side of the rectangle, and calculate the difference between the Y coordinates of the two boundary feature points that are farthest apart in the Y direction as the length of the long side of the rectangle. If the target object is a fist, the outer frame can be set to the smallest circle covering the fist, so that the side length of the outer frame can be the radius or the circumference of the circle.
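The outer-frame computation described in this step can be sketched directly from boundary feature points (the sample points below are made up for illustration):

```python
def outer_frame(points):
    """Compute the axis-aligned outer rectangle of a set of boundary
    feature points, returned as (width, length):
    width  = largest difference between X coordinates of any two points,
    length = largest difference between Y coordinates of any two points.
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return max(xs) - min(xs), max(ys) - min(ys)

# Hypothetical boundary feature points (x, y) of a detected palm:
pts = [(12, 5), (80, 9), (44, 120), (15, 60)]
width, length = outer_frame(pts)  # width = 80 - 12, length = 120 - 5
```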
S103. Calculate a distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame.

S104. Generate a moving image of the target object using the distance coefficient.

In this embodiment, one or more side lengths of the outer frame are used to calculate the distance coefficient between the target object and the image sensor, where the distance coefficient indicates how far the target object is from the image sensor. Specifically, when the outer frame is a rectangle, the short side or the long side of the rectangle may be used to calculate the distance coefficient, or the sum of the short and long sides may be used. Because the side length changes linearly with distance, the computed distance coefficient is also linear and does not jump. The side length of the outer frame, or the sum of its side lengths, may directly represent the distance between the target object and the image sensor, or it may enter the distance calculation as a coefficient; that is, the distance is some function of the side length of the outer frame. The user may define this function or use one preset in the system, and each functional relationship produces a different moving-image effect.

In the technical solution of the above embodiment, the outer frame of the target object is identified and the distance coefficient between the target object and the image sensor is calculated from the side lengths of the outer frame. Because the side length of the outer frame varies linearly with distance, it reflects the motion trajectory of the object more faithfully and avoids the jumps, and the resulting image jitter, caused by using the area of the outer frame to represent distance as in the prior art.
FIG. 2 is a flowchart of Embodiment 2 of a moving image generation method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method may include the following steps:

S201. Detect a target object using an image sensor.

S202. Identify the outer frame of the target object.

S203. Calculate a current distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame, and calculate a distance coefficient from the current distance coefficient and a historical distance coefficient.

S204. Generate a moving image of the target object using the distance coefficient.
In this embodiment, to make the motion of the image smoother, a historical distance coefficient is introduced: it is combined with the current distance coefficient to compute the distance coefficient actually used. To this end, the system needs a cache for at least one historical distance coefficient; that is, as soon as a distance coefficient has been calculated, it is written into the cache for use in later distance coefficient calculations.
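A minimal sketch of such a cache, assuming a fixed capacity; the class and method names are illustrative. Each newly computed coefficient is pushed immediately so later calculations can read it, and the oldest entries are evicted automatically.

```python
from collections import deque

class DistanceCoefficientCache:
    """Keeps the most recent distance coefficients; oldest entries
    are dropped automatically once capacity is reached."""
    def __init__(self, capacity=5):
        self._buf = deque(maxlen=capacity)

    def push(self, coefficient):
        # Called immediately after each distance coefficient is computed.
        self._buf.append(coefficient)

    def history(self):
        # Oldest-first list of cached coefficients.
        return list(self._buf)

cache = DistanceCoefficientCache(capacity=3)
for c in [1.0, 1.2, 1.5, 1.7]:
    cache.push(c)
# capacity is 3, so the oldest value (1.0) has been evicted
```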
In one implementation, the historical distance coefficient is the distance coefficient between the target object and the image sensor in the previous image frame, the distance being computed once per frame. Alternatively, the historical distance coefficient is the distance coefficient between the target object and the image sensor at a previous moment, where the previous moment may be the previous calculation time or a user-defined time, for example one second earlier; this time should be set before the method runs, so that the system knows which distance coefficients to keep. As a further alternative, the historical distance coefficient is the average of multiple distance coefficients before the current time. The average may be a plain arithmetic mean or a weighted mean. One implementation of the weighted mean is as follows: suppose there are 5 historical distance coefficients; sorted by time, they form a historical distance coefficient vector

[formula image in the original: the historical distance coefficient vector]

A smoothing matrix is set, which is a vector of time weight coefficients

[formula image in the original: the time weight coefficient vector]

The two vectors are convolved to obtain the average of the distance coefficients. In this average, among the 5 historical distance coefficients, the coefficients closer to the current time receive larger weights, so the computed historical distance coefficient is smoother and closer to the true historical distance coefficient.
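The weighted average described above can be sketched as a dot product of the time-ordered history vector with the time weight vector (the "smoothing matrix"). The weight values below are illustrative assumptions: they sum to 1, with the largest weight on the most recent coefficient.

```python
def weighted_history_average(history, weights):
    """history: distance coefficients ordered oldest to newest.
    weights: time weight coefficients of the same length, with the
    most recent coefficient weighted most heavily."""
    assert len(history) == len(weights)
    return sum(h * w for h, w in zip(history, weights))

history = [1.0, 1.1, 1.3, 1.6, 2.0]        # 5 historical coefficients
weights = [0.05, 0.10, 0.15, 0.30, 0.40]   # more weight near the present
avg = weighted_history_average(history, weights)
```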
In one implementation, calculating the distance coefficient from the current distance coefficient and the historical distance coefficient proceeds as follows: multiply the current distance coefficient by a first weight coefficient α to obtain a first weighted distance coefficient; multiply the historical distance coefficient by a second weight coefficient β to obtain a second weighted distance coefficient; and add the first and second weighted distance coefficients to obtain the distance coefficient, where α + β = 1, α > 0, and β > 0. The first and second weight coefficients may be preset or user-defined. Among the user-defined options, coefficient combinations that achieve predetermined motion effects can be preset, or the weights can be fully customized. For example, the user can adjust the weight coefficients with a slider: when the slider is at the far left, the first weight coefficient is 1 and the second is 0; when the slider is at the center, both are 0.5; when the slider is at the far right, the first weight coefficient is 0 and the second is 1. The user can thus freely control the weight values and, while adjusting, preview a moving image generated from a standard image to see the effect of the weight coefficients.
After the distance coefficients are obtained, multiple distance coefficients are used to calculate the distance from the target object to the image sensor at multiple moments or in multiple image frames, generating a continuous moving image of the target object.

In this embodiment, calculating the distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame includes: calculating a current distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame, and calculating the distance coefficient from the current distance coefficient and a historical distance coefficient. In this embodiment, the weights given to the current and historical distance coefficients when calculating the distance coefficient can be adjusted, and the way the historical distance coefficient is computed can also be controlled, so as to achieve different moving-image effects.
FIG. 3 is a flowchart of Embodiment 3 of a moving image generation method provided by an embodiment of the present disclosure. As shown in FIG. 3, the method may include the following steps:

S301. Detect a palm using an image sensor.

S302. Identify the smallest rectangle covering the palm.

S303. Calculate a distance coefficient between the palm and the image sensor using one or more side lengths of the smallest rectangle.

S304. Generate a moving image of the target object using the distance coefficient.

In this embodiment, the application scenario is limited to a user moving a palm toward and away from the camera in front of it.
When identifying the palm, color features can be used to locate the position of the palm, segment the palm from the background, and then perform feature extraction and recognition on the located and segmented palm image. Specifically: use the image sensor to obtain the color information of the image and the position information of that color information; compare the color information with preset palm color information; identify first color information whose error relative to the preset palm color information is less than a first threshold; and use the position information of the first color information to form the contour of the palm. Preferably, to avoid interference from ambient brightness, the RGB color-space image data collected by the image sensor can be mapped to the HSV color space, and the information in the HSV color space used for the comparison; preferably, the hue value in the HSV color space is used as the color information, since hue is least affected by brightness and filters out brightness interference well. The palm contour is used to roughly determine the position of the palm, after which feature points are extracted from the palm; the feature-point extraction may use the method described in Embodiment 1 or any suitable feature-point extraction method in the prior art, without limitation here.
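The color-matching step can be sketched as below, assuming per-pixel RGB values. The preset palm hue and the hue tolerance are illustrative values, not taken from the disclosure; the standard-library `colorsys` module is used for the RGB-to-HSV mapping.

```python
import colorsys

PALM_HUE = 0.05        # preset palm hue (illustrative value, range 0..1)
HUE_TOLERANCE = 0.04   # first threshold on the hue error (illustrative)

def palm_pixels(pixels):
    """pixels: iterable of ((x, y), (r, g, b)) with r, g, b in 0..255.
    Returns the positions whose hue is within the tolerance of the
    preset palm hue; hue is compared because it is least affected
    by brightness."""
    matches = []
    for pos, (r, g, b) in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if abs(h - PALM_HUE) < HUE_TOLERANCE:
            matches.append(pos)
    return matches

# A skin-like pixel matches; a blue background pixel does not
result = palm_pixels([((0, 0), (220, 170, 140)), ((1, 0), (20, 40, 200))])
```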
After the palm image feature points are extracted, the boundary feature points of the palm are obtained; the difference between the X coordinates of the two boundary feature points farthest apart in X is taken as the width of the rectangle, and the difference between the Y coordinates of the two boundary feature points farthest apart in Y is taken as the length of the rectangle. The smallest rectangle covering the palm is identified in this way.

Let the length of the long side of the rectangle be L and the length of the short side be W. The distance coefficient can be calculated using the following function:
f(x) = Ax^a
where x = L + W, A > 0, and 0 < a < 1.
This function is a power function with exponent less than 1, so its value grows as x grows, but at an ever-decreasing rate. For a moving palm image, this achieves the following effect: the closer the palm is to the lens, the faster it moves, and the farther the palm is from the lens, the slower it moves. The value of a determines the rate of change with x and can be user-defined, for example with the slider implementation of Embodiment 2, which is not repeated here. It should be noted that the above function is only an example; in practice, other suitable functions may be used instead. In one implementation, multiple functions and their corresponding parameters can be preset in the system, and the user can select a function and adjust its parameters to achieve different moving-image effects.
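The power-law distance coefficient above can be sketched directly. The values A = 1.0 and a = 0.5 are illustrative choices satisfying A > 0 and 0 < a < 1.

```python
def distance_coefficient(long_side, wide_side, A=1.0, a=0.5):
    """f(x) = A * x**a with x = L + W. With a < 1 the coefficient
    grows with x but at a decreasing rate, so motion appears faster
    when the palm is near the lens and slower when it is far away."""
    x = long_side + wide_side
    return A * x ** a

c_near = distance_coefficient(120, 80)  # large bounding box: palm near the lens
c_far = distance_coefficient(30, 20)    # small bounding box: palm far away
```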
In one implementation, the distance coefficient is calculated from the current distance coefficient and the historical distance coefficient:
f′(x) = αf(x_n) + βf(x_{n-1})
where α > 0, β > 0, α + β = 1, and n ≥ 2; f′(x) denotes the distance coefficient, f(x_n) the current distance coefficient, and f(x_{n-1}) the distance coefficient at the previous moment.
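The smoothing formula above can be sketched as a simple two-term blend; the value of α is illustrative.

```python
def smoothed_coefficient(current, previous, alpha=0.7):
    """Blend the current distance coefficient with the previous one,
    per f'(x) = alpha*f(x_n) + beta*f(x_{n-1}) with alpha + beta = 1
    and alpha, beta > 0."""
    beta = 1.0 - alpha
    return alpha * current + beta * previous

# A sudden jump in the raw coefficient is damped in the smoothed value
s = smoothed_coefficient(current=2.0, previous=1.0, alpha=0.7)
```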
In this embodiment, the target object is a palm, applied to the scenario of a user moving a palm back and forth in front of the lens, which can give the user's palm motion different motion effects. It should be noted that although this embodiment uses a palm to describe the technical solution, it can be understood that the target object may be any other object, and the technical solution of this embodiment can be used to generate a moving image for it.

The moving image generating devices of one or more embodiments of the present disclosure are described in detail below. Those skilled in the art will understand that each of these moving image generating devices can be built from commercially available hardware components configured through the steps taught in this solution.
FIG. 4 is a schematic structural diagram of Embodiment 1 of a moving image generating device provided by an embodiment of the present disclosure. As shown in FIG. 4, the device includes a detection module 41, a recognition module 42, a distance coefficient calculation module 43, and an image generation module 44.

The detection module 41 is configured to detect a target object using an image sensor.

The recognition module 42 is configured to identify the outer frame of the target object.

The distance coefficient calculation module 43 is configured to calculate a distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame.

The image generation module 44 is configured to generate a moving image of the target object using the distance coefficient.

The device shown in FIG. 4 can execute the method of the embodiment shown in FIG. 1. For parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 1. For the execution process and technical effect of this technical solution, see the description of the embodiment shown in FIG. 1, which is not repeated here.
In Embodiment 2 of the moving image generating device provided by an embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 4, the modules perform the following steps:

The detection module 41 is configured to detect a target object using an image sensor.

The recognition module 42 is configured to identify the outer frame of the target object.

The distance coefficient calculation module 43 is configured to calculate a current distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame, and to calculate a distance coefficient from the current distance coefficient and a historical distance coefficient.

The image generation module 44 is configured to generate a moving image of the target object using the distance coefficient.

The distance coefficient calculation module 43 includes:

a first distance coefficient calculation module, configured to calculate the current distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame; and

a second distance coefficient calculation module, configured to calculate the distance coefficient from the current distance coefficient and the historical distance coefficient.

The second distance coefficient calculation module includes:

a first weighted distance coefficient calculation module, configured to multiply the current distance coefficient by a first weight coefficient to obtain a first weighted distance coefficient;

a second weighted distance coefficient calculation module, configured to multiply the historical distance coefficient by a second weight coefficient to obtain a second weighted distance coefficient; and

a third distance coefficient calculation module, configured to add the first weighted distance coefficient and the second weighted distance coefficient to obtain the distance coefficient.

The device in Embodiment 2 can execute the method of the embodiment shown in FIG. 2. For parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 2. For the execution process and technical effect of this technical solution, see the description of the embodiment shown in FIG. 2, which is not repeated here.
In Embodiment 3 of the moving image generating device provided by an embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 4, the modules perform the following steps:

The detection module 41 is configured to detect a palm using an image sensor.

The recognition module 42 is configured to identify the smallest rectangle covering the palm.

The distance coefficient calculation module 43 is configured to calculate a distance coefficient between the palm and the image sensor using one or more side lengths of the smallest rectangle.

The image generation module 44 is configured to generate a moving image of the target object using the distance coefficient.

The detection module 41 includes:

an information acquisition module, configured to obtain color information of an image and position information of the color information using an image sensor;

a comparison module, configured to compare the color information with preset palm color information;

a recognition module, configured to identify first color information whose error relative to the preset palm color information is less than a first threshold; and

a contour forming module, configured to form the contour of the palm using the position information of the first color information.

The moving image generating device in this embodiment can execute the method of the embodiment shown in FIG. 3. For parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 3. For the execution process and technical effect of this technical solution, see the description of the embodiment shown in FIG. 3, which is not repeated here.
FIG. 5 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 5, the electronic device 50 according to an embodiment of the present disclosure includes a memory 51 and a processor 52.

The memory 51 is configured to store non-transitory computer-readable instructions. Specifically, the memory 51 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like.

The processor 52 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 50 to perform desired functions. In an embodiment of the present disclosure, the processor 52 is configured to run the computer-readable instructions stored in the memory 51, so that the electronic device 50 performs all or some of the steps of the moving image generation methods of the embodiments of the present disclosure described above.
Those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also fall within the protection scope of the present disclosure.

For a detailed description of this embodiment, refer to the corresponding descriptions in the foregoing embodiments, which are not repeated here.

FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in FIG. 6, the computer-readable storage medium 60 according to an embodiment of the present disclosure stores non-transitory computer-readable instructions 61. When the non-transitory computer-readable instructions 61 are run by a processor, all or some of the steps of the moving image generation methods of the embodiments of the present disclosure described above are performed.

The computer-readable storage medium 60 includes, but is not limited to: optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (for example, memory cards), and media with built-in ROM (for example, ROM cartridges).

For a detailed description of this embodiment, refer to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
FIG. 7 is a schematic diagram illustrating the hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 7, the moving image generating terminal 70 includes the moving image generating device of the embodiments described above.

The terminal device may be implemented in various forms. The terminal devices in the present disclosure may include, but are not limited to, mobile terminal devices such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
As an equivalent alternative implementation, the terminal may also include other components. As shown in FIG. 7, the moving image generating terminal 70 may include a power supply unit 71, a wireless communication unit 72, an A/V (audio/video) input unit 73, a user input unit 74, a sensing unit 75, an interface unit 76, a controller 75, an output unit 78, a memory 79, and so on. FIG. 7 shows a terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.

The wireless communication unit 72 allows radio communication between the terminal 70 and a wireless communication system or network. The A/V input unit 73 is configured to receive audio or video signals. The user input unit 74 may generate key input data according to commands input by the user to control various operations of the terminal device. The sensing unit 75 detects the current state of the terminal 70, the position of the terminal 70, the presence or absence of a user's touch input to the terminal 70, the orientation of the terminal 70, the acceleration or deceleration and direction of movement of the terminal 70, and so on, and generates commands or signals for controlling the operation of the terminal 70. The interface unit 76 serves as an interface through which at least one external device can connect to the terminal 70. The output unit 78 is configured to provide output signals in a visual, audio, and/or tactile manner. The memory 79 may store software programs for the processing and control operations performed by the controller 75, or may temporarily store data that has been output or is to be output; the memory 79 may include at least one type of storage medium. Moreover, the terminal 70 may cooperate with a network storage device that performs the storage function of the memory 79 over a network connection. The controller 75 generally controls the overall operation of the terminal device. In addition, the controller 75 may include a multimedia module for reproducing or playing back multimedia data. The controller 75 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on a touch screen as characters or images. The power supply unit 71 receives external or internal power under the control of the controller 75 and provides the appropriate power required to operate the elements and components.
The various implementations of the moving image generation method proposed in the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the various implementations of the moving image generation method proposed in the present disclosure may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, the various implementations of the moving image generation method proposed in the present disclosure may be implemented in the controller 75. For a software implementation, the various implementations of the moving image generation method proposed in the present disclosure may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented as a software application (or program) written in any suitable programming language; the software code may be stored in the memory 79 and executed by the controller 75.

For a detailed description of this embodiment, refer to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
以上结合具体实施例描述了本公开的基本原理,但是,需要指出的是,在本公开中提及的优点、优势、效果等仅是示例而非限制,不能认为这些优点、优势、效果等是本公开的各个实施例必须具备的。另外,上述公开的具体细节仅是为了示例的作用和便于理解的作用,而非限制,上述细节并不限制本公开为必须采用上述具体的细节来实现。The basic principles of the present disclosure have been described above in conjunction with specific embodiments, but it should be noted that the advantages, advantages, effects, etc. mentioned in this disclosure are merely examples and not limitations, and these advantages, advantages, effects, etc. cannot be considered as Required for various embodiments of the present disclosure. In addition, the specific details of the above disclosure are merely for the purpose of illustration and ease of understanding, and are not limiting, and the above details do not limit the present disclosure to the implementation of the above specific details.
The block diagrams of the components, apparatuses, devices, and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that connection, arrangement, or configuration must follow the manner shown in the block diagrams. As those skilled in the art will recognize, these components, apparatuses, devices, and systems may be connected, arranged, or configured in any manner. Words such as "comprising", "including", and "having" are open-ended terms that mean "including but not limited to" and may be used interchangeably with it. The words "or" and "and" as used herein mean "and/or" and may be used interchangeably with it, unless the context clearly indicates otherwise. The term "such as" as used herein means "such as but not limited to" and may be used interchangeably with it.
In addition, as used herein, an "or" used in an enumeration of items beginning with "at least one of" indicates a disjunctive enumeration, so that, for example, "at least one of A, B, or C" means A or B or C, as well as AB, AC, BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be noted that, in the systems and methods of the present disclosure, components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions, and alterations may be made to the techniques described herein without departing from the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods, and actions described above. Processes, machines, manufacture, compositions of matter, means, methods, or actions that currently exist or are later developed and that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include such processes, machines, manufacture, compositions of matter, means, methods, or actions within their scope.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.

Claims (12)

  1. A moving image generation method, comprising:
    detecting a target object using an image sensor;
    identifying an outer frame of the target object;
    calculating a distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame; and
    generating a moving image of the target object using the distance coefficient.
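The four steps of claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: `detect_outer_frame` and `render` are hypothetical callables standing in for the detector and renderer the claim leaves unspecified, and the square-root coefficient anticipates the form introduced in claim 7 with A = 1 and a = 0.5.

```python
def generate_moving_image(frame, detect_outer_frame, render):
    """One iteration of the method in claim 1 (illustrative sketch).

    detect_outer_frame(frame) -> (x, y, w, h): bounding box of the target.
    render(frame, coeff)      -> image: draws the moving image at a size
                                 governed by the distance coefficient.
    Both callables are hypothetical placeholders, not part of the claim text.
    """
    x, y, w, h = detect_outer_frame(frame)   # steps 1-2: detect target, get outer frame
    coeff = (w + h) ** 0.5                   # step 3: coefficient from the side lengths
    return render(frame, coeff)              # step 4: generate the moving image
```

For example, with a detector that reports a 120 x 80 box, the coefficient passed to the renderer is (120 + 80) ** 0.5, roughly 14.14.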
  2. The moving image generation method according to claim 1, wherein the step of calculating the distance coefficient comprises:
    calculating a current distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame, and calculating the distance coefficient using the current distance coefficient and a historical distance coefficient.
  3. The moving image generation method according to claim 2, wherein the historical distance coefficient is one of:
    a distance coefficient between the target object and the image sensor in a previous image frame;
    a distance coefficient between the target object and the image sensor at a previous moment; and
    an average of a plurality of distance coefficients between the target object and the image sensor over a plurality of moments or image frames before the current moment.
  4. The moving image generation method according to claim 2 or 3, wherein calculating the distance coefficient using the current distance coefficient and the historical distance coefficient comprises:
    multiplying the current distance coefficient by a first weight coefficient to obtain a first weighted distance coefficient;
    multiplying the historical distance coefficient by a second weight coefficient to obtain a second weighted distance coefficient; and
    adding the first weighted distance coefficient and the second weighted distance coefficient to obtain the distance coefficient.
  5. The moving image generation method according to claim 4, wherein generating the moving image of the target object using the distance coefficient comprises:
    generating the moving image of the target object using a plurality of distance coefficients.
  6. The moving image generation method according to claim 1 or 2, wherein:
    the target object is a palm, and the outer frame is a smallest rectangle covering the palm.
  7. The moving image generation method according to claim 6, wherein:
    the long side of the rectangle has length L, the wide side has length W, and the distance coefficient is:
    f(x) = Ax^a, where x = L + W, A is a real number greater than 0, and 0 < a < 1.
  8. The moving image generation method according to claim 6, wherein calculating the distance coefficient using the current distance coefficient and the historical distance coefficient comprises:
    f′(x) = αf(x_n) + βf(x_(n-1)),
    where α > 0, β > 0, α + β = 1, and n ≥ 2; f′(x) denotes the distance coefficient, f(x_n) denotes the current distance coefficient, and f(x_(n-1)) denotes the distance coefficient at the previous moment.
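Claims 7 and 8 together define a concrete coefficient: a power of the summed side lengths, smoothed against the previous moment. A small sketch with illustrative parameter values A = 1, a = 0.5, α = 0.7, β = 0.3 (the claims only constrain A > 0, 0 < a < 1, α > 0, β > 0, α + β = 1):

```python
def distance_coefficient(long_side, wide_side, A=1.0, a=0.5):
    """f(x) = A * x**a with x = L + W (claim 7); A and a are illustrative."""
    x = long_side + wide_side
    return A * x ** a

def smoothed_distance_coefficient(f_current, f_previous, alpha=0.7, beta=0.3):
    """f'(x) = alpha * f(x_n) + beta * f(x_(n-1)) (claim 8)."""
    assert alpha > 0 and beta > 0 and abs(alpha + beta - 1.0) < 1e-9
    return alpha * f_current + beta * f_previous

# Two consecutive frames of a tracked palm:
f_prev = distance_coefficient(120, 80)   # x = 200
f_curr = distance_coefficient(130, 90)   # x = 220
f = smoothed_distance_coefficient(f_curr, f_prev)
```

The exponent 0 < a < 1 compresses the coefficient for large boxes, while the weighted sum damps frame-to-frame jitter in the measured box size.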
  9. The moving image generation method according to claim 6, wherein detecting the target object using the image sensor comprises:
    acquiring, using the image sensor, color information of an image and position information of the color information;
    comparing the color information with preset palm color information;
    identifying first color information, wherein an error between the first color information and the preset palm color information is less than a first threshold; and
    forming an outline of the palm using the position information of the first color information.
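The color-comparison steps of claim 9 can be sketched with NumPy. The preset palm color, the Euclidean error metric, and the threshold value are all assumptions for illustration; the claim does not fix how the error is measured:

```python
import numpy as np

def palm_mask(image_rgb, palm_color, threshold=40.0):
    """Mark pixels whose color error against the preset palm color falls
    below the first threshold (the "first color information" of claim 9).
    The Euclidean error metric and the threshold value are illustrative."""
    diff = image_rgb.astype(np.float64) - np.asarray(palm_color, np.float64)
    error = np.sqrt((diff ** 2).sum(axis=-1))
    return error < threshold

def palm_outer_frame(mask):
    """Smallest rectangle covering the masked palm pixels (claim 6)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no palm-colored pixels found
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The positions of the matching pixels give the palm region, and their bounding rectangle is the outer frame whose side lengths feed the distance coefficient of claims 1 and 7.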
  10. A moving image generation apparatus, comprising:
    a detection module configured to detect a target object using an image sensor;
    a recognition module configured to identify an outer frame of the target object;
    a distance coefficient calculation module configured to calculate a distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame; and
    an image generation module configured to generate a moving image of the target object using the distance coefficient.
  11. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the moving image generation method according to any one of claims 1 to 9.
  12. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the moving image generation method according to any one of claims 1 to 9.
PCT/CN2019/073077 2018-06-29 2019-01-25 Moving image generation method and apparatus, and electronic device and computer-readable storage medium WO2020001016A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810699053.5A CN108961314B (en) 2018-06-29 2018-06-29 Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium
CN201810699053.5 2018-06-29

Publications (1)

Publication Number Publication Date
WO2020001016A1

Family

ID=64484574

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073077 WO2020001016A1 (en) 2018-06-29 2019-01-25 Moving image generation method and apparatus, and electronic device and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN108961314B (en)
WO (1) WO2020001016A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838118A (en) * 2021-09-08 2021-12-24 杭州逗酷软件科技有限公司 Distance measuring method and device and electronic equipment
CN112001937B (en) * 2020-09-07 2023-05-23 中国人民解放军国防科技大学 Group chase and escape method and device based on visual field perception

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN108961314B (en) * 2018-06-29 2021-09-17 北京微播视界科技有限公司 Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101929836A (en) * 2009-06-25 2010-12-29 深圳泰山在线科技有限公司 Object dimensional positioning method and camera
CN105427371A (en) * 2015-12-22 2016-03-23 中国电子科技集团公司第二十八研究所 Method for keeping graphic object equal-pixel area display in three-dimensional perspective projection scene
CN105427361A (en) * 2015-11-13 2016-03-23 中国电子科技集团公司第二十八研究所 Method for displaying movable target trajectory in three-dimensional scene
CN108961314A (en) * 2018-06-29 2018-12-07 北京微播视界科技有限公司 Moving image generation method, device, electronic equipment and computer readable storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US8866821B2 (en) * 2009-01-30 2014-10-21 Microsoft Corporation Depth map movement tracking via optical flow and velocity prediction
CN101872414B (en) * 2010-02-10 2012-07-25 杭州海康威视软件有限公司 People flow rate statistical method and system capable of removing false targets
CN102999152B (en) * 2011-09-09 2016-06-29 康佳集团股份有限公司 A kind of gesture motion recognition methods and system
CN103345301B (en) * 2013-06-18 2016-08-10 华为技术有限公司 A kind of depth information acquisition method and device
CN105488815B (en) * 2015-11-26 2018-04-06 北京航空航天大学 A kind of real-time objects tracking for supporting target size to change
CN106446926A (en) * 2016-07-12 2017-02-22 重庆大学 Transformer station worker helmet wear detection method based on video analysis


Also Published As

Publication number Publication date
CN108961314B (en) 2021-09-17
CN108961314A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
US11354825B2 (en) Method, apparatus for generating special effect based on face, and electronic device
US10198823B1 (en) Segmentation of object image data from background image data
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
US10217195B1 (en) Generation of semantic depth of field effect
US11176355B2 (en) Facial image processing method and apparatus, electronic device and computer readable storage medium
CN108986016B (en) Image beautifying method and device and electronic equipment
US8879803B2 (en) Method, apparatus, and computer program product for image clustering
TWI484444B (en) Non-transitory computer readable medium, electronic device, and computer system for face feature vector construction
WO2020019664A1 (en) Deformed image generation method and apparatus based on human face
JP7106687B2 (en) Image generation method and device, electronic device, and storage medium
WO2020135529A1 (en) Pose estimation method and apparatus, and electronic device and storage medium
WO2019242271A1 (en) Image warping method and apparatus, and electronic device
WO2019218880A1 (en) Interaction recognition method and apparatus, storage medium, and terminal device
WO2021213067A1 (en) Object display method and apparatus, device and storage medium
US11308655B2 (en) Image synthesis method and apparatus
WO2018082308A1 (en) Image processing method and terminal
WO2020001016A1 (en) Moving image generation method and apparatus, and electronic device and computer-readable storage medium
JP2011134114A (en) Pattern recognition method and pattern recognition apparatus
US20210281744A1 (en) Action recognition method and device for target object, and electronic apparatus
CN109063776B (en) Image re-recognition network training method and device and image re-recognition method and device
WO2019237747A1 (en) Image cropping method and apparatus, and electronic device and computer-readable storage medium
JP7443647B2 (en) Keypoint detection and model training method, apparatus, device, storage medium, and computer program
WO2020037924A1 (en) Animation generation method and apparatus
CN110765926B (en) Picture book identification method, device, electronic equipment and storage medium
US11647294B2 (en) Panoramic video data process

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19824707

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.04.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19824707

Country of ref document: EP

Kind code of ref document: A1