Video Encoding Quality Through The Use Of On-Camera Sensor Information

Abbas; Adeel; et al.

Patent Application Summary

U.S. patent application number 15/462580 was filed with the patent office on 2017-03-17 and published on 2019-09-19 as publication number 20190289322 for video encoding quality through the use of on-camera sensor information. The applicant listed for this patent is GOPRO, INC. The invention is credited to Adeel Abbas, Sumit Chawla, and Sandeep Doshi.

Publication Number: 20190289322
Application Number: 15/462580
Family ID: 67906335
Publication Date: 2019-09-19

United States Patent Application 20190289322
Kind Code A1
Abbas; Adeel; et al. September 19, 2019

VIDEO ENCODING QUALITY THROUGH THE USE OF ON-CAMERA SENSOR INFORMATION

Abstract

Systems and methods for utilizing on-camera sensor information to improve video and/or image encoding quality are discussed herein. Specifically, the systems and methods described herein may utilize on-camera sensor information to adaptively adjust an intra frame insertion rate associated with a sequence of frames. The intra frame insertion rate associated with a sequence of frames may be adjusted based on the motion of the image capturing device while capturing the sequence of frames and a predefined motion threshold associated with the intra frame insertion rate of the sequence of frames. In some implementations, the intra frame insertion rate may be adjusted based on the activity being performed during the capture of the sequence of frames. As such, the encoding of one or more frames within a sequence of frames may be adaptively adjusted to better suit the scene depicted by the sequence of frames.


Inventors: Abbas; Adeel; (San Mateo, CA); Doshi; Sandeep; (San Mateo, CA); Chawla; Sumit; (San Mateo, CA)
Applicant: GOPRO, INC. (San Mateo, CA, US)
Family ID: 67906335
Appl. No.: 15/462580
Filed: March 17, 2017

Related U.S. Patent Documents

Application No. 62423198, filed Nov 16, 2016

Current U.S. Class: 1/1
Current CPC Class: H04N 19/107 20141101; H04N 19/172 20141101; H04N 19/593 20141101; H04N 19/46 20141101; H04N 19/139 20141101; H04N 19/177 20141101; H04N 19/426 20141101; H04N 19/137 20141101; H04N 19/167 20141101; H04N 19/182 20141101; H04N 19/527 20141101; H04N 19/43 20141101; H04N 19/134 20141101; H04N 19/543 20141101; H04N 19/44 20141101; H04N 19/56 20141101; H04N 19/86 20141101; H04N 5/23258 20130101; G06T 7/20 20130101; H04N 19/114 20141101; H04N 19/51 20141101
International Class: H04N 19/593 20060101 H04N019/593; H04N 19/134 20060101 H04N019/134; H04N 19/51 20060101 H04N019/51; H04N 19/86 20060101 H04N019/86; H04N 19/114 20060101 H04N019/114; H04N 19/177 20060101 H04N019/177; H04N 5/232 20060101 H04N005/232

Claims



1. A system that adaptively encodes a sequence of frames based on sensor information obtained from an image capturing device, the system comprising: one or more physical computer processors configured by computer readable instructions to: obtain video information related to video content, the video content comprising a plurality of frames captured in a sequence by an image capturing device, wherein the plurality of frames comprise a sequence of frames comprising at least a first frame captured at a first time and a second frame captured at a second time; determine an intra frame insertion rate associated with the sequence of frames, wherein the intra frame insertion rate identifies a frequency of intra frames within a predefined number of frames, the first frame encoded as an intra frame and the second frame being within the predefined number of frames of the first frame; obtain a predefined motion threshold; obtain motion information characterizing first motion of the image capturing device between the first time and the second time, wherein the motion information is generated by one or more motion sensors associated with the image capturing device; and encode the plurality of frames in accordance with the intra frame insertion rate unless the motion of the image capturing device exceeds the predefined motion threshold such that: the second frame is encoded as an inter frame based on the first motion of the image capturing device not exceeding the predefined motion threshold; the second frame is encoded as an intra frame based on the first motion of the image capturing device exceeding the predefined motion threshold; and frames captured subsequent to the second frame are encoded in accordance with the intra frame insertion rate unless the motion of the image capturing device exceeds the predefined motion threshold.

2. The system of claim 1, wherein the first frame and the second frame are adjacent frames within the sequence of frames.

3. The system of claim 1, wherein the one or more motion sensors include one or more of an inertial measurement unit, a GPS component, an accelerometer, a gyroscope, an altimeter, and/or a distance measurement sensor.

4. The system of claim 1, wherein the motion information characterizing motion of the image capturing device includes one or more of zoom, translational motion, rotational motion, vertical motion, horizontal motion, acceleration motion, and/or deceleration motion.

5. (canceled)

6. The system of claim 1, wherein the predefined motion threshold is associated with the intra frame insertion rate.

7. The system of claim 1, wherein the one or more physical computer processors are further configured to: obtain an identification of an activity being performed during capture of the sequence of frames, wherein the second frame is encoded as an intra frame based on the identified activity.

8. The system of claim 7, wherein to obtain the identification of the activity being performed during capture of the sequence of frames, the one or more physical processors are further configured to: obtain one or more predefined activity profiles associated with one or more activities, wherein the one or more predefined activity profiles comprise predefined motion information characterizing the one or more activities; compare the motion information characterizing the motion of the image capturing device with the predefined motion information; and determine a match between the motion information and predefined motion information characterizing at least one of the one or more activities.

9. The system of claim 7, wherein to encode the second frame as an intra frame based on the identified activity, the one or more physical processors are further configured to: obtain a predefined activity profile associated with the identified activity, wherein the predefined activity profile comprises a predefined intra frame insertion rate; and determine the intra frame insertion rate as the predefined intra frame insertion rate.

10. (canceled)

11. A method of adaptively encoding a sequence of frames based on sensor information obtained from an image capturing device, the method comprising: obtaining video information related to video content, the video content comprising a plurality of frames captured in a sequence by an image capturing device, wherein the plurality of frames comprise a sequence of frames comprising at least a first frame captured at a first time and a second frame captured at a second time; determining an intra frame insertion rate associated with the sequence of frames, wherein the intra frame insertion rate identifies a frequency of intra frames within a predefined number of frames, the first frame encoded as an intra frame and the second frame being within the predefined number of frames of the first frame; obtaining a predefined motion threshold; obtaining motion information characterizing first motion of the image capturing device between the first time and the second time, wherein the motion information is generated by one or more motion sensors associated with the image capturing device; and encoding the plurality of frames in accordance with the intra frame insertion rate unless the motion of the image capturing device exceeds the predefined motion threshold such that: the second frame is encoded as an inter frame based on the first motion of the image capturing device not exceeding the predefined motion threshold; the second frame is encoded as an intra frame based on the first motion of the image capturing device exceeding the predefined motion threshold; and frames captured subsequent to the second frame are encoded in accordance with the intra frame insertion rate unless the motion of the image capturing device exceeds the predefined motion threshold.

12. The method of claim 11, wherein the first frame and the second frame are adjacent frames within the sequence of frames.

13. The method of claim 11, wherein the one or more motion sensors include one or more of an inertial measurement unit, a GPS component, an accelerometer, a gyroscope, an altimeter, and/or a distance measurement sensor.

14. The method of claim 11, wherein the motion information characterizing motion of the image capturing device includes one or more of zoom, translational motion, rotational motion, vertical motion, horizontal motion, acceleration motion, and/or deceleration motion.

15. (canceled)

16. The method of claim 11, wherein the predefined motion threshold is associated with the intra frame insertion rate.

17. The method of claim 11, the method further comprising: obtaining an identification of an activity being performed during capture of the sequence of frames, wherein the second frame is encoded as an intra frame based on the identified activity.

18. The method of claim 17, wherein obtaining the identification of the activity being performed during capture of the sequence of frames comprises: obtaining one or more predefined activity profiles associated with one or more activities, wherein the one or more predefined activity profiles comprise predefined motion information characterizing the one or more activities; comparing the motion information characterizing the motion of the image capturing device with the predefined motion information; and determining a match between the motion information and predefined motion information characterizing at least one of the one or more activities.

19. The method of claim 17, wherein encoding the second frame as an intra frame based on the identified activity comprises: obtaining a predefined activity profile associated with the identified activity, wherein the predefined activity profile comprises a predefined intra frame insertion rate; and determining the intra frame insertion rate as the predefined intra frame insertion rate.

20. (canceled)

21. The system of claim 1, wherein the first frame and the second frame are non-adjacent frames within the sequence of frames.

22. The method of claim 11, wherein the first frame and the second frame are non-adjacent frames within the sequence of frames.
Description



FIELD OF THE INVENTION

[0001] The field of the invention relates generally to video encoding systems and methods, and more particularly, to systems and methods for utilizing on-camera sensor information to improve video encoding quality.

BACKGROUND

[0002] Image capturing devices, such as cameras, are used to capture photos and videos of individuals, buildings, landscapes, objects, and/or other images within the surroundings of the image capturing device. Oftentimes, image capturing devices may be used to capture videos of an activity or event in which the image capturing device may be moving and/or rotating as the activity or event is being captured. This movement and/or rotation of the image capturing device may present various problems for encoders attempting to generate an encoded video file using the video footage captured by the image capturing device.

[0003] When captured on video, a particular arrangement of a group of pixels within a frame may comprise a visual representation of an object within the frame. When encoding video footage, movement of the arrangement of the group of pixels from frame-to-frame may be tracked based on the position and/or particular arrangement of the group of pixels in a previous frame. For example, in a given frame, an arrangement of a group of pixels may be searched for in a fixed area based on the last known position of the arrangement within a previous frame. If located, the knowledge of the arrangement's movement may be used to reduce and/or eliminate temporal redundancy by encoding the frame in terms of the transformation of a previous, reference frame to the frame (i.e., by encoding the frame as an inter frame).
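
As a rough illustration of the frame-to-frame search described above, the sketch below performs a brute-force block match over a fixed search window using a sum-of-absolute-differences (SAD) cost. The block size, search radius, and grayscale-frame representation are illustrative assumptions, not details taken from this application.

```python
import numpy as np

def find_block(prev_frame, curr_frame, top, left, block=16, search=24):
    """Search curr_frame for the block that sat at (top, left) in prev_frame.

    Returns the best-matching position and its SAD cost within a fixed
    +/- `search` pixel window around the block's last known position.
    """
    template = prev_frame[top:top + block, left:left + block].astype(np.int32)
    h, w = curr_frame.shape
    best_pos, best_sad = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue
            candidate = curr_frame[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(candidate - template).sum()
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos, best_sad
```

If the best SAD stays below some tolerance, the arrangement of pixels is treated as located and the frame can lean on inter prediction; if every candidate is a poor match (for example, after a sharp camera rotation), the encoder falls back toward intra coding.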

[0004] When movement and/or rotation of an image capturing device results in a failure to locate a particular arrangement of a group of pixels from a previous frame within a given frame, the given frame may have to be encoded as an intra frame as opposed to an inter frame. An inter frame is encoded by reference to one or more related frames in a set of grouped frames, whereas an intra frame is encoded without reference to one or more related frames and thus contains all of its own information. While intra frames require less processing power than inter frames, intra frames utilize a greater number of bits than inter frames. As such, when movement and/or rotation of an image capturing device causes a greater number of frames to be encoded as intra frames, an encoder may reach its bit limit sooner, requiring the encoder to enter a bit rate preservation mode. This often results in degraded video quality.
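
To make the bit-budget pressure concrete, here is a back-of-the-envelope calculation under assumed frame costs; the 10:1 intra-to-inter ratio and the absolute sizes are hypothetical, not figures from this application.

```python
# Hypothetical costs: an inter frame averages 25 kbit, an intra frame roughly 10x that.
INTER_KBIT, INTRA_KBIT = 25, 250

def avg_kbit_per_frame(frames_per_intra):
    """Average bits per frame when 1 of every `frames_per_intra` frames is intra."""
    return (INTRA_KBIT + (frames_per_intra - 1) * INTER_KBIT) / frames_per_intra

# One intra every 30 frames vs. one every 5 frames (e.g., forced by heavy motion):
print(avg_kbit_per_frame(30))  # 32.5 kbit/frame
print(avg_kbit_per_frame(5))   # 70.0 kbit/frame, more than double the average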

[0005] These and additional problems exist when attempting to generate an encoded video file using video footage captured by a moving and/or rotating image capturing device.

SUMMARY

[0006] This disclosure relates to systems and methods for utilizing on-camera sensor information to improve video encoding quality. The systems and methods described herein may utilize on-camera sensor information to adaptively adjust an intra frame insertion rate associated with a sequence of frames. An intra frame insertion rate may identify a frequency of intra frames within a predefined number of frames. Different intra frame insertion rates may better suit different scenes based on the complexity of the scene and/or the movement and/or rotation of the image capturing device while capturing the scene. The systems and methods described herein may adaptively adjust an intra frame insertion rate associated with a sequence of frames based on the motion of the image capturing device while capturing the sequence of frames and a predefined motion threshold associated with the intra frame insertion rate of the sequence of frames. In some implementations, the systems and methods described herein may adaptively adjust an intra frame insertion rate associated with a sequence of frames based on the activity being performed during the capture of the sequence of frames. As such, the encoding of one or more frames within a sequence of frames may be adaptively adjusted to better suit the scene depicted by the sequence of frames.

[0007] In various implementations, the system described herein may be configured to utilize on-camera sensor information to adaptively adjust an intra frame insertion rate associated with a sequence of frames, in accordance with one or more implementations. The system may include one or more of an interface, one or more physical processors, an electronic storage, an image capturing device, and/or other components. The one or more physical processors may be configured by computer-readable instructions. Executing the computer-readable instructions may cause the one or more physical processors to utilize on-camera sensor information to improve video encoding quality. The computer-readable instructions may include one or more computer program components. The computer program components may include one or more of a frame rate determination component, a motion component, a frame rate adjustment component, an activity identification component, a video and/or image compression component, and/or other computer program components.

[0008] The frame rate determination component may be configured to determine an intra frame insertion rate associated with a sequence of frames captured by an image capturing device. The sequence of frames may comprise at least a first frame captured at a first time by the image capturing device.

[0009] The motion component may be configured to obtain motion information characterizing motion of the image capturing device over time. For example, the motion component may be configured to obtain motion information characterizing motion of the image capturing device between a first time at which a first frame was captured and a second time at which a second frame was captured. As such, the second frame may come after the first frame within a sequence of frames. The first frame and the second frame may be adjacent within the sequence of frames. In some implementations, other frames may separate the first frame and the second frame within the sequence of frames.

[0010] The motion information may be generated by one or more motion sensors associated with the image capturing device. As a non-limiting example, the one or more motion sensors may include one or more of an inertial measurement unit, a GPS component, an accelerometer, a gyroscope, an altimeter, a distance measurement sensor, a magnetometer, a magnetic position sensor, a radio-frequency position sensor, and/or other sensors. The motion information may characterize one or more motions. The one or more motions may include one or more of movement and/or rotation of the image capturing device and/or the image sensor(s), change in position of the image capturing device and/or the image sensor(s), and/or other motion of the image capturing device and/or the image sensor(s). The motion information characterizing motion of the image capturing device may include one or more of translational motion, rotational motion, vertical motion, horizontal motion, acceleration motion, deceleration motion, altitudinal motion, and/or other types of motion.

[0011] The frame rate adjustment component may be configured to adjust an intra frame insertion rate associated with a sequence of frames based on motion information characterizing motion of the image capturing device during the capture of one or more frames. For example, the frame rate adjustment component may be configured to determine, based on motion information, whether the motion of an image capturing device exceeds a predefined motion threshold. In various implementations, the predefined motion threshold may be associated with the intra frame insertion rate for the sequence of frames.

[0012] The activity identification component may be configured to obtain an identification of an activity being performed during capture of the sequence of frames. For example, the activity identification component may receive identification of the activity from a user associated with the image capturing device or identify the activity based on the motion information. The activity identification component may be configured to obtain and/or access one or more predefined activity profiles associated with one or more activities. The one or more predefined activity profiles may each comprise predefined motion information characterizing the motion of the activity to which they relate and/or a predefined intra frame insertion rate. In various implementations, the activity identification component may identify the activity being performed during capture of the sequence of frames by comparing and determining a match between the motion information characterizing the motion of the image capturing device and motion information characterizing the motion associated with an identified activity from a predefined activity profile.

[0013] The video and/or image compression component may be configured to receive a sequence of frames and generate compressed video information as an output. For example, the video compression component may be configured to encode a frame as an intra frame based on an intra frame insertion rate associated with a sequence of related frames, motion information characterizing the motion of the image capturing device between a first time at which a previous frame was captured and a second time at which the given frame was captured, a determination that the motion information indicates that the motion of the image capturing device exceeds a predefined motion threshold, an identified activity being performed during capture of the sequence of frames, a predefined intra frame insertion rate associated with an identified activity, and/or one or more other factors. The video and/or image compression component may be configured to generate compressed video information and/or image information using one or more compression methods. For example, the video and/or image compression component could utilize HEVC, H.264/AVC, AV1, CineForm, and/or other codecs to generate compressed video and/or image information.

[0014] In various implementations, the video and/or image compression component may be configured to encode a frame as an intra frame based on a determination by the frame rate adjustment component that the motion information indicates that the motion of the image capturing device exceeds a predefined motion threshold. If the frame rate adjustment component determines that the motion information indicates that the motion of the image capturing device is less than or equal to the predefined motion threshold, the video and/or image compression component may be configured to encode the frame as an inter frame.

[0015] In various implementations, the video and/or image compression component may be configured to encode a frame as an intra frame based on an activity being performed during capture of the sequence of frames identified by the activity identification component. For example, the video and/or image compression component may be configured to encode a frame as an intra frame or an inter frame based on a predefined intra frame insertion rate associated with an activity identified by the activity identification component.

[0016] When encoding a frame as an inter frame, the video and/or image compression component may be configured to encode the frame by referencing one or more previous frames. The video and/or image compression component may be configured to use motion estimation/compensation to encode the video data, such that the second frame may be described in terms of the transformation (e.g., differences) from a first frame to a second frame.

[0017] These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of "a", "an", and "the" include plural referents unless the context clearly dictates otherwise.

BRIEF DESCRIPTION OF THE DRAWINGS

[0018] FIG. 1 illustrates a system for utilizing on-camera sensor information to adaptively adjust an intra frame insertion rate, in accordance with one or more implementations.

[0019] FIG. 2 illustrates a first frame captured by an image capturing device, in accordance with one or more implementations.

[0020] FIGS. 3A-B illustrate a second frame captured by an image capturing device, in accordance with one or more implementations.

[0021] FIG. 4 illustrates a method for adaptively adjusting an intra frame insertion rate associated with a sequence of frames, in accordance with one or more implementations.

[0022] FIG. 5 illustrates a method for adaptively adjusting an intra frame insertion rate based on a predefined motion threshold and motion information characterizing the motion of an image capturing device, in accordance with one or more implementations.

[0023] FIG. 6 illustrates a method for adaptively adjusting an intra frame insertion rate based on an activity being performed during the capture of the sequence of frames, in accordance with one or more implementations.

[0024] FIGS. 7A-B depict exemplary GOP structures that may be utilized in video encoding.

DETAILED DESCRIPTION

[0025] FIG. 1 illustrates a system 100 for utilizing on-camera sensor information to adaptively adjust an intra frame insertion rate, in accordance with one or more implementations. The system may include one or more of interface 102, one or more physical processors 110, electronic storage/transmission 130, image capturing device 140, and/or other components.

[0026] The one or more physical processors 110 (also interchangeably referred to herein as processor(s) 110, processor 110, or processors 110 for convenience) may be configured to provide information processing capabilities in system 100. As such, the processor(s) 110 may comprise one or more of a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Processor(s) 110 may be configured to execute one or more computer readable instructions 112. Computer readable instructions 112 may include one or more computer program components. Computer readable instructions 112 may include one or more of frame rate determination component 114, motion component 116, frame rate adjustment component 118, activity identification component 120, video compression component 122, and/or other computer program components. As used herein, for convenience, the various computer readable instructions 112 will be described as performing an operation, when, in fact, the various instructions 112 program the processor(s) 110 (and therefore system 100) to perform the operation.

[0027] Frame rate determination component 114 may be configured to determine an intra frame insertion rate associated with a sequence of frames captured by an image capturing device (e.g., image capturing device 140). An intra frame insertion rate may identify a frequency of intra frames within a predefined number of frames. Image capturing device 140 may capture visual content, including a sequence of frames comprising at least a first frame captured at a first point in time. Visual content may include one or more of an image, a sequence of images, a frame of a video, a video, audio content, and/or other visual content. The video may be composed of many still images, also known as frames. The frames may compose a moving picture, such as the visual content.

[0028] The visual content (e.g., the visual content including the first frame captured at the first point in time) may be obtained through use of one or more image sensors 144. In some implementations, the image sensor(s) 144 may be carried (e.g., attached to, supported, held, disposed on, and/or otherwise carried) by an object (e.g., a gimbal). In some implementations, the image sensor(s) 144 may be carried by a vehicle (e.g., a car, a bike, a boat, an airplane, and/or other vehicle). In some implementations, the image sensor(s) 144 may be carried by a remote controlled vehicle (e.g., remote controlled airplane, remote controlled car, remote controlled submarine, and/or other remote controlled vehicle). In some implementations, the image sensor(s) 144 may be carried by an unmanned aerial vehicle (e.g., drones and/or other unmanned aerial vehicles). In some implementations, the image sensor(s) 144 may be carried by a person. In some implementations, the image sensor(s) 144 may be carried by an animal. Other carryings of the image sensor(s) 144 may be contemplated.

[0029] Motion component 116 may be configured to obtain motion information characterizing motion of an image capturing device over time. For example, motion component 116 may be configured to obtain motion information characterizing motion of image capturing device 140 between a first time at which a first frame was captured and a second time at which a second frame was captured. The second point in time may come after the first point in time. As such, the second frame may come after the first frame within a sequence of frames. The first frame and the second frame may be adjacent within the sequence of frames. In some implementations, other frames may separate the first frame and the second frame within the sequence of frames.

[0030] The motion information may be generated by one or more motion sensors 146 associated with image capturing device 140. As a non-limiting example, one or more motion sensors 146 may include one or more of an inertial measurement unit, a GPS component, an accelerometer, a gyroscope, an altimeter, a distance measurement sensor, a magnetometer, a magnetic position sensor, a radio-frequency position sensor, and/or other sensors. In some implementations, motion component 116 may be configured to obtain motion information from one or more motion sensors located external to image capturing device 140 and/or system 100, which may provide the motion information to image capturing device 140 and/or system 100.

[0031] The motion information may characterize one or more motions. The one or more motions may include one or more of movement and/or rotation of image capturing device 140 and/or image sensor(s) 144, change in position of image capturing device 140 and/or image sensor(s) 144, and/or other motion of image capturing device 140 and/or image sensor(s) 144. The motion information characterizing motion of image capturing device 140 may include one or more of translational motion, rotational motion, vertical motion, horizontal motion, acceleration motion, deceleration motion, altitudinal motion, and/or other types of motion.

[0032] Image capturing device 140 and/or image sensor(s) 144 associated with image capturing device 140 may move for various reasons. The videographer may move image capturing device 140 and/or image sensor(s) 144 to capture one or more objects within a slightly different field of view by moving image capturing device 140 in a different direction, by increasing or decreasing the altitude of image capturing device 140, by rotating image capturing device 140, and/or by moving image capturing device 140 in another manner. In various implementations, motion component 116 may be configured to analyze the motion of image capturing device 140 based on the motion information.

[0033] In an exemplary implementation, motion component 116 may be configured to obtain motion information from one or more motion sensors 146 characterizing a rotational motion of image capturing device 140 between the first point in time at which the first frame may have been captured and a second point in time at which a second frame may be captured. Image capturing device 140 may have moved (e.g., rotated) to the right, left, up, down, or any other direction by a number of degrees represented by a coordinate system, such as a Cartesian coordinate system, a cylindrical and/or polar coordinate system, a spherical and/or polar coordinate system, and/or any other coordinate system. For example, image capturing device 140 may have rotated clockwise by 45-degrees between the first point in time and the second point in time, thus capturing a slightly different field of view within the second frame at the second point in time than within the first frame at the first point in time. Due to movement and/or rotation of image capturing device 140, the position of an object within the field of view of image capturing device 140 may be displaced between the first frame and the second frame. As such, an arrangement of a group of pixels comprising a visual representation of the object within the first frame, captured at the first point in time, may have moved to a different position or have been modified within the second frame, captured at the second point in time.
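
As a minimal sketch of how a known camera roll, such as the 45-degree example above, could be used to predict where a pixel from the first frame will land in the second frame, the function below applies an in-plane rotation about the image center. The pure-roll assumption and the sign convention relating gyroscope output to image rotation are assumptions for illustration only.

```python
import math

def predict_position(x, y, roll_deg, width, height):
    """Predict where the pixel at (x, y) in the first frame appears in the second
    frame after the camera rolls by `roll_deg` about its optical axis.

    Assumes a pure in-plane rotation about the image center; a clockwise camera
    roll shows up as a counter-clockwise rotation of scene content (the exact
    sign convention depends on the sensor's coordinate frame).
    """
    cx, cy = width / 2.0, height / 2.0
    theta = math.radians(-roll_deg)  # scene rotates opposite to the camera
    dx, dy = x - cx, y - cy
    nx = cx + dx * math.cos(theta) - dy * math.sin(theta)
    ny = cy + dx * math.sin(theta) + dy * math.cos(theta)
    return nx, ny

# Example: a pixel near the top-left of a 1920x1080 frame, camera rolled 45 degrees.
print(predict_position(200, 150, 45.0, 1920, 1080))
```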

[0034] For example, and referring to FIG. 2, first frame 200 is depicted. A first object represented by arrangement of pixels 202 and a second object represented by arrangement of pixels 204 are depicted within first frame 200 captured by an image capturing device (e.g., image capturing device 140) at a first point in time.

[0035] Referring to FIG. 3A and FIG. 3B, second frame 300 is depicted. Motion component 116 may be configured to obtain motion information characterizing motion of image capturing device 140 between the first point in time at which first frame 200 was captured and a second point in time at which second frame 300 was captured. Based upon the motion information, motion component 116 may be configured to determine that image capturing device 140 rotated clockwise by 45-degrees, while remaining at the same level or altitude. As such, FIG. 3A is a depiction of second frame 300 captured at the 45-degree clockwise rotation. FIG. 3B depicts second frame 300 without the 45-degree clockwise rotation from FIG. 3A (e.g., similar to the depiction of first frame 200 from FIG. 2). Based upon the 45-degree clockwise rotation, the first object and the second object appear to have moved within the field of view of image capturing device 140. For example, arrangement of pixels 202 representing the first object appears to have moved from the top left corner of first frame 200 (as depicted in dashed lines within FIG. 3A and FIG. 3B) to the bottom left corner of second frame 300 (now depicted by arrangement of pixels 302 in FIG. 3A and FIG. 3B). Similarly based upon the 45-degree clockwise rotation, arrangement of pixels 204 representing the second object appears to have moved from the bottom right corner of first frame 200 (as depicted in dashed lines within FIG. 3A and FIG. 3B) to the top right corner of second frame 300 (now depicted by arrangement of pixels 304 in FIG. 3A and FIG. 3B).

[0036] In some implementations, movement and/or rotation of image capturing device 140 may result in a failure to locate arrangement of pixels 302 and/or arrangement of pixels 304 in second frame 300. This failure may result in second frame 300 having to be encoded as an intra frame as opposed to an inter frame. When movement and/or rotation of an image capturing device causes a greater number of frames to be encoded as intra frames, an encoder may reach its bit limit sooner, requiring the encoder to enter a bit rate preservation mode, resulting in degraded video quality.

[0037] Referring back to FIG. 1, frame rate adjustment component 118 may be configured to adjust an intra frame insertion rate associated with a sequence of frames based on motion information characterizing the motion of image capturing device 140 during the capture of one or more frames. The intra frame insertion rate for a group of frames may affect various parameters associated with the encoded video file. For example, the intra frame insertion rate may affect the available memory and/or power, the ability to randomly access individual frames, compression efficiency, and/or system latency.

[0038] FIG. 7A and FIG. 7B depict exemplary GOP structures with varying intra frame insertion rates that may be utilized in video encoding. A GOP (or "group of pictures") is a group of successive pictures within a coded video stream. A GOP structure may specify the order in which types of frames are arranged within a group of successive pictures. FIG. 7A depicts an example of an IPPP3 GOP structure, wherein three inter frames (or P frames) are located between any two intra frames (or I frames). FIG. 7B depicts an example of an IBBP8 GOP structure, wherein eight frames (i.e., a combination of bi-predictive frames (or B frames) and inter frames) are located between any two intra frames.
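
The two patterns named above can be expanded into explicit frame-type sequences, as in the small sketch below. The exact B/P interleave chosen for IBBP8 is one plausible reading and is assumed for illustration.

```python
def ippp(gop_between_intras=3, total=16):
    """I followed by `gop_between_intras` P frames, repeated (e.g., IPPP3)."""
    pattern = ["I"] + ["P"] * gop_between_intras
    return [pattern[i % len(pattern)] for i in range(total)]

def ibbp(gop_between_intras=8, total=18):
    """I followed by a B,B,P interleave until the next I (one reading of IBBP8)."""
    between = ["B", "B", "P"] * ((gop_between_intras + 2) // 3)
    pattern = ["I"] + between[:gop_between_intras]
    return [pattern[i % len(pattern)] for i in range(total)]

print("".join(ippp()))  # IPPPIPPPIPPPIPPP
print("".join(ibbp()))  # IBBPBBPBBIBBPBBPBB
```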

[0039] Typically, an intra frame insertion rate is predefined and does not change based on the motion of an image capturing device. In some cases, an intra frame insertion rate may be modified based on whether movement and/or rotation of an image capturing device results in a failure to locate a particular arrangement of a group of pixels from a previous frame within a given frame, as described above with respect to FIG. 3A and FIG. 3B. Regardless, different intra frame insertion rates may better suit different scenes based on the complexity of the scene and/or the movement and/or rotation of the image capturing device while capturing the scene.

[0040] Referring back to FIG. 1, frame rate adjustment component 118 may be configured to adjust an intra frame insertion rate associated with a sequence of frames based on motion information characterizing the motion of image capturing device 140 to better suit the scene depicted by the sequence of frames. For example, frame rate adjustment component 118 may be configured to adjust an intra frame insertion rate based on a predefined motion threshold.

[0041] In various implementations, frame rate adjustment component 118 may obtain a predefined motion threshold. For example, frame rate adjustment component 118 may obtain a predefined motion threshold from electronic storage/transmission 130. In some implementations, frame rate adjustment component 118 may receive a predefined motion threshold from a user and/or an administrator or other user who manages system 100.

[0042] A predefined motion threshold may indicate the motion allowability within a sequence of frames. A predefined motion threshold may define the motion allowability in terms of motion information obtained from one or more motion sensors 146. In various implementations, a predefined motion threshold may be associated with a specific intra frame insertion rate. For example, frame rate adjustment component 118 may obtain a predefined motion threshold from a frame rate profile 132 associated with the intra frame insertion rate of the sequence of frames determined by frame rate determination component 114.
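
A frame rate profile 132 of the kind described here might be little more than a lookup from a GOP structure (and its intra frame insertion rate) to the motion it tolerates. The structure below, including the choice of peak rotation rate in degrees per second as the threshold unit and the specific profile names, is a hypothetical sketch rather than the application's actual profile format.

```python
# Hypothetical frame rate profiles: each maps an intra frame insertion rate
# (expressed as frames between intra frames) to the camera motion it tolerates,
# measured here as peak rotation rate in degrees/second.
FRAME_RATE_PROFILES = {
    "IPPP3":  {"frames_between_intras": 3,  "motion_threshold_dps": 180.0},
    "IBBP8":  {"frames_between_intras": 8,  "motion_threshold_dps": 90.0},
    "IPPP29": {"frames_between_intras": 29, "motion_threshold_dps": 30.0},
}

def threshold_for(profile_name):
    """Look up the predefined motion threshold tied to an intra frame insertion rate."""
    return FRAME_RATE_PROFILES[profile_name]["motion_threshold_dps"]
```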

[0043] In various implementations, frame rate adjustment component 118 may be configured to determine, based on motion information characterizing the motion of image capturing device 140, whether the motion of image capturing device 140 exceeds a predefined motion threshold. For example, frame rate adjustment component 118 may be configured to compare the motion of image capturing device 140 (as characterized by the motion information) with a predefined motion threshold. Based on the comparison, frame rate adjustment component 118 may be configured to determine whether the motion of image capturing device 140 exceeds the predefined motion threshold.
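
One plausible way to reduce raw gyroscope samples to a single number that can be compared against the predefined motion threshold is sketched below; the peak-rotation-rate metric and the sample format are assumptions rather than details from the application.

```python
import math

def exceeds_motion_threshold(gyro_samples_dps, threshold_dps):
    """Return True if camera motion between two capture times exceeds the threshold.

    `gyro_samples_dps` is an iterable of (wx, wy, wz) angular rates in deg/s
    collected between the capture of the first and second frames; the metric
    here is the peak rotation-rate magnitude over that interval.
    """
    peak = max(math.sqrt(wx * wx + wy * wy + wz * wz)
               for wx, wy, wz in gyro_samples_dps)
    return peak > threshold_dps

# Example: a burst of fast panning sampled at a few points between two frames.
samples = [(5.0, 2.0, 1.0), (60.0, 10.0, 0.5), (140.0, 20.0, 3.0)]
print(exceeds_motion_threshold(samples, threshold_dps=90.0))  # True
```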

[0044] In various implementations, frame rate adjustment component 118 may be configured to adjust an intra frame insertion rate associated with a sequence of frames. For example, frame rate adjustment component 118 may be configured to identify an intra frame insertion rate based on the motion information and output the identified intra frame insertion rate to video compression component 122. In various implementations, frame rate adjustment component 118 may be configured to identify an intra frame insertion rate associated with a sequence of frames based on a determination that the motion of image capturing device 140 exceeds a predefined motion threshold. In various implementations, frame rate adjustment component 118 may be configured to obtain an intra frame insertion rate associated with an activity being performed during capture of a sequence of frames and identified by activity identification component 120.

[0045] Activity identification component 120 may be configured to obtain an identification of an activity being performed during the capture of a sequence of frames. For example, activity identification component 120 may be configured to receive identification of an activity being performed during capture of a sequence of frames from a user associated with image capturing device 140 and/or identify the activity being performed during capture of the sequence of frames based on motion information characterizing the motion of image capturing device 140 during the capture of the sequence of frames.

[0046] In various implementations, activity identification component 120 may obtain an identification of an activity from among one or more activities associated with one or more activity profiles 134. While capturing a sequence of frames, image capturing device 140 may be carried by a videographer while the videographer participates in various activities. For example, a videographer may be skateboarding, snowboarding, surfing, running, skydiving, and/or performing one or more other activities while capturing a sequence of frames. One or more activity profiles 134 may each relate to skateboarding, snowboarding, surfing, running, skydiving, and/or one or more other activities. In some implementations, activity identification component 120 may be configured to obtain and/or access one or more activity profiles 134 stored in electronic storage/transmission 130.

[0047] In various implementations, activity identification component 120 may be configured to receive identification of an activity being performed during capture of a sequence of frames from a user associated with image capturing device 140. For example, activity identification component 120 may be configured to receive identification of an activity from a user (e.g., a videographer and/or one or more other users of system 100) prior to, during, or after capturing the sequence of frames. Activity identification component 120 may be configured to receive identification of one or more activities and/or an activity profile associated with one or more activities.

[0048] In various implementations, activity identification component 120 may be configured to identify an activity being performed during capture of a sequence of frames based on motion information characterizing the motion of image capturing device 140 during the capture of the sequence of frames. For example, activity identification component 120 may be configured to compare motion information characterizing the motion of image capturing device 140 with predefined motion information characterizing the motion associated with one or more activities. In various implementations, activity identification component 120 may be configured to obtain and/or access predefined motion information characterizing the motion of the one or more activities from one or more activity profiles 134. In some implementations, activity identification component 120 may be configured to identify an activity being performed during the capture of a sequence of frames by determining a match between the motion information characterizing the motion of the image capturing device during the capture of the sequence of frames and motion information characterizing the identified activity from one or more activity profiles 134.
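
A minimal sketch of the compare-and-match step is shown below, assuming each activity profile 134 stores a few summary motion statistics and that nearest-distance matching with a cutoff stands in for whatever matching logic an implementation actually uses; the profile values are invented for illustration.

```python
# Hypothetical activity profiles keyed by summary motion statistics:
# mean rotation rate (deg/s) and variance of vertical acceleration.
ACTIVITY_PROFILES = {
    "surfing":      {"mean_rotation_dps": 40.0,  "vert_accel_var": 2.0},
    "snowboarding": {"mean_rotation_dps": 90.0,  "vert_accel_var": 6.0},
    "skydiving":    {"mean_rotation_dps": 150.0, "vert_accel_var": 1.0},
}

def identify_activity(mean_rotation_dps, vert_accel_var, max_distance=50.0):
    """Return the best-matching activity name, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, profile in ACTIVITY_PROFILES.items():
        dist = ((profile["mean_rotation_dps"] - mean_rotation_dps) ** 2 +
                (profile["vert_accel_var"] - vert_accel_var) ** 2) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

print(identify_activity(mean_rotation_dps=85.0, vert_accel_var=5.0))  # snowboarding
```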

[0049] Video compression component 122 may be configured to receive a sequence of frames and generate compressed video information and/or image information as output. For example, video compression component 122 may be configured to encode a frame within a sequence of frames as an intra frame based on an intra frame insertion rate associated with the sequence of frames, motion information characterizing the motion of image capturing device 140 during the capture of the sequence of frames, a determination that the motion information indicates that the motion of the image capturing device exceeds a predefined motion threshold, an identified activity being performed during capture of the sequence of frames, a predefined intra frame insertion rate associated with an identified activity, and/or one or more other factors. Video compression component 122 may be configured to generate compressed video and/or image information using one or more compression methods. For example, video compression component 122 may utilize HEVC, H.264/AVC, AV1, CineForm, and/or other codecs to generate compressed video and/or image information.

[0050] In various implementations, video compression component 122 may be configured to encode a frame according to a predefined intra frame insertion rate based on a determination as to whether motion information characterizing motion of image capturing device 140 during the capture of a sequence of related frames indicates that the motion of the image capturing device exceeds a predefined motion threshold. For example, if frame rate adjustment component 118 determines that the motion of image capturing device 140 exceeds a predefined motion threshold, video compression component 122 may be configured to encode a frame as an intra frame. If frame rate adjustment component 118 determines that the motion of image capturing device 140 does not exceed a predefined motion threshold, video compression component 122 may be configured to encode the frame based on the predefined intra frame insertion rate. For example, video compression component 122 may be configured to encode the frame as an inter frame based on a determination that the motion of image capturing device 140 does not exceed the predefined motion threshold.
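
Putting these pieces together, the decision rule in this paragraph could be sketched as the per-frame loop below: frames follow the scheduled intra cadence unless the measured motion for a frame interval crosses the threshold, in which case an intra frame is forced. The cadence restarting at a forced intra frame, and the scalar motion measure, are assumptions of this sketch.

```python
def plan_frame_types(per_frame_motion, frames_between_intras, motion_threshold):
    """Return a list of 'I'/'P' decisions for a sequence of frames.

    `per_frame_motion[i]` is a scalar motion measure for the interval ending at
    frame i (frame 0 always starts the sequence as an intra frame).
    """
    decisions, since_intra = [], 0
    for i, motion in enumerate(per_frame_motion):
        force_intra = motion > motion_threshold
        scheduled_intra = (i == 0) or (since_intra >= frames_between_intras)
        if force_intra or scheduled_intra:
            decisions.append("I")
            since_intra = 0
        else:
            decisions.append("P")
            since_intra += 1
    return decisions

# Quiet capture with one burst of fast rotation at frame 6:
motion = [0, 10, 12, 8, 9, 11, 200, 15, 9, 7, 8, 10]
print("".join(plan_frame_types(motion, frames_between_intras=4, motion_threshold=90)))
# IPPPPIIPPPPI
```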

[0051] When encoding a frame as an inter frame, video compression component 122 may be configured to encode the frame by referencing one or more previous frames. Video compression component 122 may be configured to use motion estimation/compensation to encode the video data, such that the frame may be described in terms of the transformation (e.g., differences) from a first frame to a second frame.
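
For the inter-frame case, a bare-bones view of describing a frame as a transformation from a reference frame is a motion-compensated prediction plus a residual. The single-block sketch below, with a known motion vector, illustrates the idea only and does not follow any particular codec.

```python
import numpy as np

def block_residual(ref_frame, curr_frame, top, left, mv, block=16):
    """Residual for one block of the current frame, predicted from the reference.

    `mv` is the (dy, dx) motion vector found for this block; an encoder would
    transmit `mv` plus a compressed form of the residual instead of raw pixels.
    """
    dy, dx = mv
    prediction = ref_frame[top + dy:top + dy + block, left + dx:left + dx + block]
    actual = curr_frame[top:top + block, left:left + block]
    return actual.astype(np.int16) - prediction.astype(np.int16)

def reconstruct_block(ref_frame, residual, top, left, mv, block=16):
    """Decoder side: prediction from the reference frame plus the residual."""
    dy, dx = mv
    prediction = ref_frame[top + dy:top + dy + block, left + dx:left + dx + block]
    return (prediction.astype(np.int16) + residual).clip(0, 255).astype(np.uint8)
```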

[0052] In various implementations, video compression component 122 may be configured to encode a frame according to a predefined intra frame insertion rate associated with an activity being performed during capture of a sequence of related frames. For example, video compression component 122 may receive an identification of an activity being performed during the capture of a sequence of frames from activity identification component 120. In some implementations, video compression component 122 may be configured to obtain and/or access an activity profile 134 associated with the identified activity. In various implementations, one or more activity profiles 134 may each comprise a predefined intra frame insertion rate associated with one or more activities. Based on a predefined intra frame insertion rate obtained from an activity profile 134 associated with the identified activity, video compression component 122 may encode a frame as an intra frame, an inter frame, and/or some other type of frame.

[0053] Video compression component 122 may be configured to generate compressed video information resulting in higher overall video quality than with existing video encoding processes. If there is significant movement and/or rotation by an image capturing device during the capture of a sequence of frames, encoders may be forced to go into bit preservation mode because limits may exist for constant or near-constant bit rate videos. This may result in overall degraded video quality. With the improved encoding process disclosed herein, video compression component 122 may be configured to encode a sequence of frames according to an adjusted intra frame insertion rate that better suits the scene depicted by the sequence of frames.

[0054] Electronic storage/transmission 130 may include electronic storage media that electronically stores and/or transmits information. The electronic storage media of electronic storage/transmission 130 may be provided integrally (i.e., substantially non-removable) with one or more components of system 100 and/or removable storage that is connectable to one or more components of system 100 via, for example, a port (e.g., a USB port, a Firewire port, and/or other port) or a drive (e.g., a disk drive and/or other drive). Electronic storage/transmission 130 may include one or more of optically readable storage media (e.g., optical disks and/or other optically readable storage media), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, and/or other magnetically readable storage media), electrical charge-based storage media (e.g., EPROM, EEPROM, RAM, and/or other electrical charge-based storage media), solid-state storage media (e.g., flash drive and/or other solid-state storage media), and/or other electronically readable storage media. Electronic storage/transmission 130 may be a separate component within system 100, or electronic storage/transmission 130 may be provided integrally with one or more other components of system 100 (e.g., processor 110). Although electronic storage/transmission 130 is shown in FIG. 1 as a single entity, this is for illustrative purposes only. In some implementations, electronic storage/transmission 130 may comprise a plurality of storage units. These storage units may be physically located within the same device, or electronic storage/transmission 130 may represent storage functionality of a plurality of devices operating in coordination.

[0055] Electronic storage/transmission 130 may store software algorithms, information determined by processor 110, information received remotely, and/or other information that enables system 100 to function properly. For example, electronic storage/transmission 130 may store information relating to an object, the position of an arrangement of a group of pixels within a first frame, motion information characterizing motion of an image capturing device, a first portion of a second frame to search for the object, and/or other information.

[0056] In various implementations, electronic storage/transmission 130 may store one or more frame rate profiles 132. One or more frame rate profiles 132 may comprise information associated with one or more intra frame insertion rates and/or one or more GOP structures and useful when encoding video according to the one or more intra frame insertion rates or manipulating video encoded according to the one or more GOP structures. For example, one or more frame rate profiles 132 may comprise one or more predefined motion thresholds associated with one or more intra frame insertion rates and/or the one or more GOP structures.

[0057] In various implementations, electronic storage/transmission 130 may store one or more activity profiles 134. One or more activity profiles 134 may each be associated with one or more activities that may be performed during the capture of a sequence of frames. For example, one or more activity profiles 134 may each relate to skateboarding, snowboarding, surfing, running, skydiving, and/or one or more other activities. In various implementations, one or more activity profiles 134 may each comprise predefined motion information characterizing the motion of the one or more activities, a predefined intra frame insertion rate associated with one or more activities, and/or other information associated with one or more activities and useful when encoding video that was captured by image capturing device 140 while a user of image capturing device 140 was performing the one or more activities.

[0058] Image capturing device 140 may comprise one or more of a computing platform, a mobile device (e.g., a smart phone, a tablet, and/or other mobile device), a camera (e.g., an action camera, a sports camera, and/or other type of camera), a video recorder, and/or other device configured to capture images and/or video segments. Image capturing device 140 may capture visual content including the first frame captured at the first point in time. Users may capture visual content using image capturing device 140. Image capturing device 140 may include one or more of optical element 142, one or more image sensor(s) 144, one or more motion sensors 146, and/or other components. In various implementations, processor(s) 110 may be located within image capturing device 140.

[0059] Optical element 142 may be configured to guide light to an image sensor (e.g., one or more image sensor(s) 144). Optical element 142 may include one or more of standard lens, macro lens, zoom lens, special-purpose lens, telephoto lens, prime lens, achromatic lens, apochromatic lens, process lens, wide-angle lens, ultra-wide-angle lens, fisheye lens, infrared lens, ultraviolet lens, perspective control lens, other lens, and/or other optical elements. Optical element 142 may guide light received from an object within the field of view of an image sensor (e.g., one or more image sensor(s) 144) directly, or indirectly through use of one or more light manipulating components. For example, a light manipulating component may include one or more of a mirror, a prism, lenses, and/or other light manipulating components.

[0060] One or more image sensor(s) 144 (also interchangeably referred to herein as image sensor(s) 144, image sensor 144, or image sensors 144 for convenience) may be configured to generate a first output signal conveying visual information present in the light guided thereto by optical element 142 within the field of view of one or more image sensor(s) 144. One or more image sensor(s) 144 may include one or more of a charge-coupled device sensor, an active pixel sensor, a complementary metal-oxide semiconductor sensor, an N-type metal-oxide-semiconductor sensor, and/or other image sensors. Visual information may include content within the field of view of one or more image sensor(s) 144, such as one or more objects within the field of view of one or more image sensor(s) 144, a landscape within the field of view of one or more image sensor(s) 144, and/or other content within the field of view of one or more image sensor(s) 144. An image and/or video segment captured by the image capturing device may include visual information and other information, including audio information.

[0061] One or more motion sensors 146 (also interchangeably referred to herein as motion sensor(s) 146, motion sensor 146, or motion sensors 146 for convenience) may be configured to generate motion information characterizing motion of the image capturing device over time. As described further above, one or more motion sensors 146 may generate motion information characterizing motion of the image capturing device between a first time at which a first frame was captured and a second time at which a second frame was captured.

[0062] Implementations of the disclosure may be made in hardware, firmware, software, or any suitable combination thereof. Aspects of the disclosure may be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a tangible computer readable storage medium may include read only memory, random access memory, magnetic disk storage media, optical storage media, flash memory devices, and others, and a machine-readable transmission media may include forms of propagated signals, such as carrier waves, infrared signals, digital signals, and others. Firmware, software, routines, or instructions may be described herein in terms of specific exemplary aspects and implementations of the disclosure, and as performing certain actions.

[0063] Although processor 110, electronic storage/transmission 130, and image capturing device 140 are shown to be connected to interface 102 in FIG. 1, any communication medium may be used to facilitate interaction between any components of system 100. One or more components of system 100 may communicate with each other through hard-wired communication, wireless communication, or both. For example, one or more components of system 100 may communicate with each other through a network. For example, processor 110 may wirelessly communicate with electronic storage/transmission 130. By way of non-limiting example, wireless communication may include one or more of radio communication, Bluetooth communication, Wi-Fi communication, cellular communication, infrared communication, or other wireless communication. Other types of communications are contemplated by the present disclosure.

[0064] Although processor 110 is illustrated in FIG. 1 as a single component, this is for illustrative purposes only. In some implementations, processor 110 may comprise a plurality of processing units. These processing units may be physically located within the same device, or processor 110 may represent processing functionality of a plurality of devices operating in coordination. Processor 110 may be configured to execute one or more components by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor 110.

[0065] Furthermore, it should be appreciated that although the various instructions are illustrated in FIG. 1 as being co-located within a single processing unit, in implementations in which processor(s) 110 include multiple processing units, one or more instructions may be executed remotely from the other instructions.

[0066] The description of the functionality provided by the different computer-readable instructions described herein is for illustrative purposes and is not intended to be limiting, as any of the instructions may provide more or less functionality than is described. For example, one or more of the instructions may be eliminated, and some or all of their functionality may be provided by other ones of the instructions. As another example, processor(s) 110 may be programmed by one or more additional instructions that may perform some or all of the functionality attributed herein to one of the computer-readable instructions.

[0067] FIG. 4 illustrates a method 400 for adaptively adjusting an intra frame insertion rate associated with a sequence of frames, in accordance with one or more implementations. The operations of method 400 presented below are intended to be illustrative and, as such, should not be viewed as limiting. In some implementations, method 400 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. In some implementations, two or more of the operations may occur substantially simultaneously. The described operations may be accomplished using some or all of the system components described in detail above.

[0068] In some implementations, method 400 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 400 in response to instructions stored electronically on one or more electronic storage mediums. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 400.

[0069] In an operation 402, method 400 may include determining an intra frame insertion rate associated with a sequence of frames captured by an image capturing device. An intra frame insertion rate may identify a frequency of intra frames within a predefined number of frames. In some implementations, operation 402 may be performed by a processor component the same as or similar to frame rate determination component 114 (shown in FIG. 1 and described herein).
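
As a minimal sketch, assuming the insertion rate is expressed as one intra frame every N frames (the disclosure leaves the exact representation open), a frame index can be checked against that schedule as follows; the helper name and example values are illustrative.

    def is_scheduled_intra(frame_index, frames_per_intra):
        """Return True when frame_index falls on the periodic intra frame schedule."""
        return frame_index % frames_per_intra == 0

    # Example: a rate of one intra frame per 30 frames marks indices 0, 30, 60, ...
    assert is_scheduled_intra(60, frames_per_intra=30)
    assert not is_scheduled_intra(31, frames_per_intra=30)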

[0070] In an operation 404, method 400 may include obtaining motion information characterizing motion of the image capturing device. The motion information may characterize motion of the image capturing device between a first time at which a first frame was captured and a second time at which a second frame was captured. The motion information may be generated by and/or obtained from one or more motion sensors associated with the image capturing device. In some implementations, operation 404 may be performed by a processor component the same as or similar to motion component 116 (shown in FIG. 1 and described herein).

[0071] In an operation 406, method 400 may include encoding the second frame as an intra frame based on the motion information and the intra frame insertion rate associated with the sequence of frames. For example, the second frame may be encoded as an intra frame based on an intra frame insertion rate associated with a sequence of related frames and determined in operation 402, motion information characterizing the motion of an image capturing device during the capture of the sequence of frames and obtained in operation 404, a determination that the motion information indicates that the motion of the image capturing device exceeds a predefined motion threshold, an identified activity being performed during capture of the sequence of frames, a predefined intra frame insertion rate associated with an identified activity, and/or one or more other factors. In some implementations, operation 406 may be performed by a processor component the same as or similar to video compression component 122 (shown in FIG. 1 and described herein).
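
The decision described for operation 406 might be sketched as follows, combining the scheduled insertion rate from operation 402 with the motion measure and threshold from operation 404; this is an illustrative approximation rather than the claimed encoder logic, and all names and values are assumptions.

    def choose_frame_type(frame_index, frames_per_intra, motion, motion_threshold):
        """Return 'intra' or 'inter' for the frame at frame_index."""
        if frame_index % frames_per_intra == 0:   # operation 402: scheduled insertion rate
            return "intra"
        if motion > motion_threshold:             # operations 404/406: motion forces an extra intra frame
            return "intra"
        return "inter"                            # otherwise predict from neighboring frames as usual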

[0072] FIG. 5 illustrates a method 500 for adaptively adjusting an intra frame insertion rate based on a predefined motion threshold and motion information characterizing the motion of an image capturing device, in accordance with one or more implementations. The operations of method 500 presented below are intended to be illustrative and, as such, should not be viewed as limiting. In some implementations, method 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. In some implementations, two or more of the operations may occur substantially simultaneously. The described operations may be accomplished using some or all of the system components described in detail above.

[0073] In some implementations, method 500 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 500 in response to instructions stored electronically on one or more electronic storage mediums. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 500.

[0074] In an operation 502, method 500 may include obtaining motion information characterizing motion of the image capturing device. The motion information may characterize motion of the image capturing device between a first time at which a first frame was captured and a second time at which a second frame was captured. The motion information may be generated by and/or obtained from one or more motion sensors associated with the image capturing device. In some implementations, operation 502 may be performed by a processor component the same as or similar to motion component 116 (shown in FIG. 1 and described herein).

[0075] In an operation 504, method 500 may include determining whether the motion of the image capturing device exceeds a threshold. For example, the motion of an image capturing device (as characterized by the motion information) may be compared with a predefined motion threshold. A predefined motion threshold may indicate the amount of motion permitted within a sequence of frames. In various implementations, a predefined motion threshold may be associated with a specific intra frame insertion rate. In some implementations, operation 504 may be performed by a processor component the same as or similar to frame rate adjustment component 118 (shown in FIG. 1 and described herein).

[0076] In an operation 506, method 500 may include encoding the second frame as an intra frame responsive to the determination that the motion of the image capturing device exceeds a predefined motion threshold. For example, if the motion of the image capturing device is determined to exceed a predefined motion threshold based on the comparison performed in operation 504, the second frame may be encoded as an intra frame. In some implementations, operation 506 may be performed by a processor component the same as or similar to video compression component 122 (shown in FIG. 1 and described herein).

[0077] In an operation 508, method 500 may include encoding the second frame as an inter frame responsive to the determination that the motion of the image capturing device does not exceed a predefined motion threshold. For example, if the motion of the image capturing device is determined to be less than or equal to a predefined motion threshold based on the comparison performed in operation 504, the second frame may be encoded as an inter frame, as opposed to an intra frame. In some implementations, operation 508 may be performed by a processor component the same as or similar to video compression component 122 (shown in FIG. 1 and described herein).
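
Taken together, operations 502 through 508 could be sketched as a per-frame loop over a captured sequence; the per-frame motion values, the threshold, and the function name below are assumptions used only to illustrate the branch between intra and inter encoding.

    def encode_sequence(per_frame_motion, motion_threshold):
        """per_frame_motion[i]: motion between frame i-1 and frame i (index 0 unused)."""
        decisions = ["intra"]                       # first frame of the sequence is an intra frame
        for motion in per_frame_motion[1:]:
            if motion > motion_threshold:           # operation 506: threshold exceeded
                decisions.append("intra")
            else:                                   # operation 508: at or below threshold
                decisions.append("inter")
        return decisions

    # Example: only the third transition exceeds a threshold of 0.5.
    print(encode_sequence([0.0, 0.1, 0.2, 0.9, 0.3], motion_threshold=0.5))
    # ['intra', 'inter', 'inter', 'intra', 'inter']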

[0078] FIG. 6 illustrates a method 600 for adaptively adjusting an intra frame insertion rate based on an activity being performed during the capture of the sequence of frames, in accordance with one or more implementations. The operations of method 600 presented below are intended to be illustrative and, as such, should not be viewed as limiting. In some implementations, method 600 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. In some implementations, two or more of the operations may occur substantially simultaneously. The described operations may be accomplished using some or all of the system components described in detail above.

[0079] In some implementations, method 600 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 600 in response to instructions stored electronically on one or more electronic storage mediums. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 600.

[0080] In an operation 602, method 600 may include obtaining motion information characterizing motion of the image capturing device. The motion information may characterize motion of the image capturing device between a first time at which a first frame was captured and a second time at which a second frame was captured. The motion information may be generated by and/or obtained from one or more motion sensors associated with the image capturing device. In some implementations, operation 602 may be performed by a processor component the same as or similar to motion component 116 (shown in FIG. 1 and described herein).

[0081] In an operation 604, method 600 may include obtaining and/or accessing one or more predefined activity profiles associated with one or more activities. The one or more predefined activity profiles may each comprise predefined motion information characterizing the motion of the one or more activities, a predefined intra frame insertion rate associated with the one or more activities, and/or other information useful when encoding video captured by an image capturing device while a user of the image capturing device was performing the one or more activities. In some implementations, operation 604 may be performed by a processor component the same as or similar to activity identification component 120 (shown in FIG. 1 and described herein).
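
Purely as an illustration of the kind of record such a profile might hold, the sketch below pairs a characteristic motion signature with an activity-specific intra frame insertion rate; the field names, activity names, and sample values are assumptions, not taken from the disclosure.

    from dataclasses import dataclass

    @dataclass
    class ActivityProfile:
        name: str
        typical_motion: float        # characteristic motion magnitude for the activity
        frames_per_intra: int        # one intra frame every this many frames

    # Hypothetical profiles with illustrative values.
    PROFILES = [
        ActivityProfile("surfing", typical_motion=0.8, frames_per_intra=15),
        ActivityProfile("mountain_biking", typical_motion=1.2, frames_per_intra=10),
        ActivityProfile("time_lapse", typical_motion=0.1, frames_per_intra=60),
    ]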

[0082] In an operation 606, method 600 may include comparing the motion information characterizing the motion of the image capturing device with the predefined motion information characterizing the motion of one or more activities. In some implementations, operation 606 may be performed by a processor component the same as or similar to activity identification component 120 (shown in FIG. 1 and described herein).

[0083] In an operation 608, method 600 may include determining a match between the motion information and the predefined motion information characterizing the motion of at least one of the one or more activities. In various implementations, the activity being performed during the capture of the sequence of frames may be identified based on the determined match between the motion information and the predefined motion information characterizing the motion of at least one of the one or more activities. In some implementations, operation 608 may be performed by a processor component the same as or similar to activity identification component 120 (shown in FIG. 1 and described herein).

[0084] In an operation 610, method 600 may include encoding the second frame as an intra frame based on the predefined activity profile associated with the at least one of the one or more activities. For example, the second frame may be encoded as an intra frame based on a predefined intra frame insertion rate associated with the at least one of the one or more activities and obtained from an activity profile associated with the at least one of the one or more activities. In some implementations, operation 610 may be performed by a processor component the same as or similar to video compression component 122 (shown in FIG. 1 and described herein).
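
Under the profile structure assumed in the sketch following operation 604, operations 606 through 610 might be approximated as a nearest-signature match followed by encoding at the matched profile's insertion rate; the nearest-signature rule is one plausible comparison, as the disclosure does not prescribe a specific matching method, and the function names are assumptions.

    def match_profile(measured_motion, profiles):
        """Operations 606/608: pick the profile whose motion signature is closest to the measured motion."""
        return min(profiles, key=lambda p: abs(p.typical_motion - measured_motion))

    def frame_type_for_activity(frame_index, measured_motion, profiles):
        """Operation 610: encode according to the matched profile's intra frame insertion rate."""
        profile = match_profile(measured_motion, profiles)
        return "intra" if frame_index % profile.frames_per_intra == 0 else "inter"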

[0085] For purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the description. It will be apparent, however, to one skilled in the art that implementations of the disclosure can be practiced without these specific details or with an equivalent arrangement. In some instances, modules, structures, processes, features, and devices are shown in block diagram form in order to avoid obscuring the description. In other instances, functional block diagrams and flow diagrams are shown to represent data and logic flows. The components of block diagrams and flow diagrams (e.g., modules, blocks, structures, devices, features, and/or other components) may be variously combined, separated, removed, reordered, and replaced in a manner other than as expressly described and depicted herein.

[0086] Reference in this specification to "one implementation", "an implementation", "some implementations", "various implementations", "certain implementations", "other implementations", "one series of implementations", or the like means that a particular feature, design, structure, or characteristic described in connection with the implementation is included in at least one implementation of the disclosure. The appearances of, for example, the phrase "in one implementation" or "in an implementation" in various places in the specification are not necessarily all referring to the same implementation, nor are separate or alternative implementations mutually exclusive of other implementations. Moreover, whether or not there is express reference to an "implementation" or the like, various features are described, which may be variously combined and included in some implementations, but also variously omitted in other implementations. Similarly, various features are described that may be preferences or requirements for some implementations, but not other implementations.

[0087] The language used herein has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. Other implementations, uses and advantages of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The specification should be considered exemplary only, and the scope of the invention is accordingly intended to be limited only by the following claims.

* * * * *
