Video Preview Creation Based On Environment

McIntosh; David; et al.

Patent Application Summary

U.S. patent application number 14/173732 was filed with the patent office on 2014-08-07 for video preview creation based on environment. This patent application is currently assigned to REDUX, INC. The applicant listed for this patent is REDUX, INC. Invention is credited to David McIntosh, Chris Pennello.

Application Number: 20140219634 14/173732
Family ID: 51259288
Filed Date: 2014-08-07

United States Patent Application 20140219634
Kind Code A1
McIntosh; David; et al. August 7, 2014

VIDEO PREVIEW CREATION BASED ON ENVIRONMENT

Abstract

A method is provided for creating and displaying portions of videos called video previews. The video previews may be created using an encoding technique or palette-based optimization technique suited to the particular user device, application, or network that will display the video preview generated from a portion of the full video. The video previews are configured to play a series of images associated with images from the portion of the full video when the video preview is activated.


Inventors: McIntosh; David; (Del Mar, CA); Pennello; Chris; (Berkeley, CA)
Applicant: REDUX, INC.; Berkeley, CA, US
Assignee: REDUX, INC.; Berkeley, CA
Family ID: 51259288
Appl. No.: 14/173732
Filed: February 5, 2014

Related U.S. Patent Documents

Application Number Filing Date Patent Number
61761096 Feb 5, 2013
61822105 May 10, 2013
61847996 Jul 18, 2013
61905772 Nov 18, 2013

Current U.S. Class: 386/278
Current CPC Class: H04N 21/8455 20130101; H04N 21/482 20130101; G11B 27/034 20130101; G11B 27/105 20130101; H04N 5/04 20130101; H04N 21/8549 20130101; G11B 27/102 20130101; G11B 20/00007 20130101; G06K 9/00751 20130101; H04N 21/47217 20130101; G11B 27/036 20130101; G06F 3/04842 20130101; G06F 3/04855 20130101; G11B 27/34 20130101; G11B 27/031 20130101; G06F 3/0484 20130101
Class at Publication: 386/278
International Class: G11B 27/034 20060101 G11B027/034; G11B 20/00 20060101 G11B020/00

Claims



1. A method of creating a video preview, the method comprising: receiving, at a provider server, a request to generate a video preview of a full video, the request specifying a portion of the full video; receiving at least the portion of the full video at the provider server; receiving an identifier specifying a device or an application for displaying the video preview; determining an encoding technique based on the identifier to generate the video preview, wherein the video preview is of the portion of the full video; creating, by the provider server, the video preview from the full video based on the determined encoding technique; and providing, by the provider server, the video preview to a user device.

2. The method of claim 1, wherein the encoding technique is dependent on the identification of a type of user device that submitted the request to generate the video preview.

3. The method of claim 1, wherein the encoding technique is dependent on the type of application that will display the video preview at the user device.

4. The method of claim 1, wherein the encoding technique includes a graphics interchange format (GIF), an MP4 container, an H.264 video codec, an advanced audio coding (AAC) audio codec, a WebM container, a VP8 video codec, or an Ogg Vorbis audio codec.

5. The method of claim 1, wherein the video preview is a first video preview generated using a first encoding technique, and the method further comprising: generating a second video preview using a second encoding technique, wherein the second video preview is generated to allow the user device to share the second video preview through a particular medium; and providing both the first video preview and second video preview to the user device.

6. The method of claim 1, further comprising: downloading the full video through a plurality of video streams, wherein the plurality of video streams include video content from the full video; and generating one or more video previews from video content received through the plurality of video streams.

7. The method of claim 1, wherein the video preview is a first video preview, the determined encoding technique is a first encoding technique, and the method further comprises: determining a second encoding technique, wherein the second encoding technique is different from the first encoding technique, and wherein the second encoding technique is used to share a second video preview with a different device or application than the first encoding technique; and creating the second video preview using the second encoding technique.

8. The method of claim 1, wherein the encoding technique includes a palette-based size optimization by generating a common color palette for the video preview and limiting the video preview to the common color palette.

9. A method of compressing a video file, the method comprising: receiving, at a computer, a request to generate a compressed video file, the request specifying at least a portion of a full video to be used in creating the compressed video file, wherein the specified portion of the full video comprises a plurality of images; determining, by the computer, a palette-based optimization technique to generate the compressed video file; analyzing, by the computer, the plurality of images using the palette-based optimization technique to determine at least one common color palette, each to be used to generate multiple compressed images of the compressed video file; specifying, by the computer, first multiple compressed images to be generated using a first common color palette; and creating, by the computer, the compressed video file from the plurality of images of the specified portion of the full video such that the first multiple compressed images are rendered using the first common color palette when the compressed video file is viewed.

10. The method of claim 9, further comprising providing, by the computer, the compressed video file to a user device.

11. The method of claim 9, wherein the common color palette for the video preview is limited to a single frame in the video preview.

12. The method of claim 9, wherein the at least one common color palette is generated from one or more images that contain at least a specified file size.

13. The method of claim 12, wherein the specified file size is above a threshold.

14. The method of claim 12, wherein the specified file size corresponds to a maximum file size when compared with other images in the portion of the full video.

15. The method of claim 9, wherein the common color palette is generated from the one or more images that are selected periodically from the video preview.

16. The method of claim 9, further comprising: before creating the compressed video file, analyzing the specified portion of the full video; determining a plurality of scenes in the specified portion of the full video based on the analysis; and generating one or more common color palettes for each of the plurality of scenes.

17. The method of claim 9, further comprising: before creating the compressed video file, analyzing the specified portion of the full video; determining a plurality of scenes in the specified portion of the full video; determining an image that is representative of each scene; and generating a common color palette that combines colors from the representative images.

18. The method of claim 9, wherein the common color palette comprises a union of colors identified in the analyzed images.

19. The method of claim 9, wherein the palette-based optimization technique includes an indexed color technique to manage colors in the frame.

20. A computer product comprising a non-transitory computer readable medium storing a plurality of instructions that when executed control a computer system to create a video preview, the instructions comprising: receive a request to generate a video preview of a full video, the request specifying a portion of the full video; receive at least the portion of the full video; receive an identifier specifying a device or an application for displaying the video preview; determine an encoding technique based on the identifier to generate the video preview, wherein the video preview is of the portion of the full video; create the video preview from the full video based on the determined encoding technique; and provide the video preview to a user device.
Description



CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application is a non-provisional application of U.S. Patent Application No. 61/761,096, filed on Feb. 5, 2013, U.S. Patent Application No. 61/822,105, filed on May 10, 2013, U.S. Patent Application No. 61/847,996, filed on Jul. 18, 2013, and U.S. Patent Application No. 61/905,772, filed on Nov. 18, 2013, which are herein incorporated by reference in their entirety for all purposes.

[0002] This application is related to commonly owned and concurrently filed U.S. patent application Ser. No. ______, entitled "Video Preview Creation with Link" (Attorney Docket 91283-000710US-896497), U.S. patent application Ser. No. ______, entitled "User Interface for Video Preview Creation" (Attorney Docket 91283-000720US-897301), U.S. patent application Ser. No. ______, entitled "Video Preview Creation with Audio" (Attorney Docket 91283-000740US-897294), U.S. patent application Ser. No. ______, entitled "Generation of Layout of Videos" (Attorney Docket 91283-000750US-897295), U.S. patent application Ser. No. ______, entitled "Activating a Video Based on Location in Screen" (Attorney Docket 91283-000760US-897296), which are herein incorporated by reference in their entirety for all purposes.

BACKGROUND

[0003] Users commonly provide video content to websites (e.g., YouTube), which can be referred to as "posting a video." A user can spend a significant amount of time conveying the message of the video before a viewer selects the video (e.g., by clicking the video displayed on a website). For example, the user can associate a title, a static thumbnail image, and/or a textual description with the video. Users often have a difficult time when the video originates on a different website and the user tries to upload the video to a video server. Further, the title may not be descriptive of the contents of the video, the static thumbnail image may not summarize the essence of the video, or the description of the video may be a poor signal for whether the video will be interesting to a viewer.

[0004] Video browsing is also limited. Other users (e.g., viewers) can access and view the video content via the websites. For example, the viewers can see a video's title and static thumbnail of the video before deciding whether to play the full video. However, the viewers may find it difficult to select particular videos of interest because the title may not be descriptive of the contents of the video, the static thumbnail image may not summarize the essence of the video, or the textual description with the video may be a poor signal for whether the video will be interesting to the viewer. Thus, the viewers may spend significant amounts of time searching and watching videos that are not enjoyable to the viewer.

SUMMARY

[0005] Embodiments of the present invention can create and display portions of videos as video previews. The video previews may be associated with a full video, such that the video preview is generated from a portion of the full video. The video previews can be generated in various ways based on an identification of the device, application, or network that will be used to activate or play the video preview. Once activated, the video preview can play a series of images associated with images from the portion of the full video (e.g., to convey the essence of the full video via a video preview).

[0006] Additionally, embodiments of the present invention provide a method for creating video previews without an identification of the device, application, or network that will be used to activate or play the video preview. For example, a computing device can generate multiple video previews in anticipation of a selected medium for activating the video preview. In another example, the computing device can receive parallel input streams of the full video to speed up generation of the multiple video previews.

[0007] Further, embodiments of the present invention provide a method for creating a compressed video file using a palette-based optimization technique. For example, a computing device may create a common color palette among multiple images specified in the full video. The common color palette can be used to generate the compressed video file.

[0008] Other embodiments are directed to systems and computer readable media associated with methods described herein.

[0009] A better understanding of the nature and advantages of the present invention may be gained with reference to the following detailed description and the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] FIG. 1 shows a flowchart illustrating a method of creating a video preview, organizing the video previews, and providing a user interface that includes the video previews according to an embodiment of the present invention.

[0011] FIG. 2 shows block diagrams of various computing devices used to generate or provide a video preview.

[0012] FIG. 3 shows a flowchart illustrating a method of identifying a video preview from a full video according to an embodiment of the present invention.

[0013] FIG. 4 shows illustrations of a video preview displayed with various devices according to an embodiment of the present invention.

[0014] FIG. 5 shows illustrations of a video preview displayed in various applications according to an embodiment of the present invention.

[0015] FIG. 6 shows a flowchart illustrating a method of generating a video preview using a palette-based optimization technique according to an embodiment of the present invention.

[0016] FIG. 7 shows an illustration of a common color palette according to an embodiment of the present invention.

[0017] FIG. 8 shows a block diagram of a computer apparatus according to an embodiment of the present invention.

DEFINITIONS

[0018] A "video preview" or "compressed video file" (used interchangeably) is a visual representation of a portion of a video (also referred to as a "full video" to contrast a "video preview" of the video). The full video may correspond to the entirety of a video file or a portion of the video file, e.g., when only a portion of the video file has been streamed to a user device. The video preview is shorter (e.g., fewer images, less time) than the full (e.g., more images, longer time, substantially complete) video, but the full video can itself be shorter than the complete video file, and the preview can convey the essence of the full video. In various embodiments, a preview can be a continuous portion of the full video or include successive frames that are not continuous in the full video (e.g., two successive frames of the preview may actually be one or more seconds apart in the full video).
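The non-contiguous case can be pictured as sampling frame timestamps at a fixed stride from the selected portion of the full video. The helper below is a hypothetical sketch for illustration only; the function name and stride parameter are not taken from the application:

```python
def sample_preview_times(start, end, stride):
    """Return frame timestamps for a preview whose successive frames
    are `stride` seconds apart in the full video (non-contiguous)."""
    if stride <= 0 or end <= start:
        raise ValueError("need end > start and a positive stride")
    times, t = [], start
    while t < end:
        times.append(round(t, 3))
        t += stride
    return times

# A 10-second portion sampled every 2 seconds yields 5 preview frames.
print(sample_preview_times(30.0, 40.0, 2.0))  # [30.0, 32.0, 34.0, 36.0, 38.0]
```

Two successive preview frames here are 2 seconds apart in the full video, matching the definition's example of non-continuous frames.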

DETAILED DESCRIPTION

[0019] Embodiments of the present invention can enhance video viewing by providing short, playable video previews through a graphical user interface (GUI) or provided directly to the user device (e.g., stored in a clipboard). Viewers can use the GUI of video previews to better decide whether to watch a full video, or channel of videos.

[0020] In one embodiment, the user may create a video preview that may later be accessed by a viewer. For example, the user may select the best 1-10 seconds of a video to convey the essence of the full video. The video preview can be shorter (e.g., fewer images, less time) than a full (e.g., more images, longer time, substantially complete) video. The system associated with the GUI may generate a smaller file to associate with the video portion (e.g., animated GIF, MP4, collection of frames, RIFF). The system may provide the GUI on a variety of systems. For example, the GUI can be provided via an internet browser or client applications (e.g., software configured to be executed on a device), and configured to run on a variety of devices (e.g., mobile, tablet, set-top, television, game console).

I. Providing Video Previews

[0021] FIG. 1 shows a flowchart illustrating a method 100 of creating a video preview, organizing the video previews, and providing a user interface that includes the video previews according to an embodiment of the present invention. The method 100 may comprise a plurality of steps for implementing an embodiment of creating a video preview based on an environment (e.g., the user device, application, or network that will display a video preview or transfer the video preview to a destination). Various computing devices may be used to perform the steps of the method, including video servers, provider servers, user devices, or third party devices.

[0022] At step 110, a video preview may be generated. Embodiments of the invention may provide a graphical user interface for a user that allows the user to request to generate a video preview, the request specifying a portion of a full video to use as the video preview. The system may generate the video preview based on the type of device or application that will display the video preview (e.g., using input from the user, using information transmitted from the device, using an identifier specifying the device, application, or network that will display the video preview). The input may be active (e.g., the user or device providing an identification of the device or application in response to a request, a third party providing information for a plurality of streaming television programs) or passive (e.g., the device transmitting information as a push notification). In response to the input (e.g., identifier), the computing device can determine an encoding technique based on the identifier to generate the video preview and create the video preview from the full video based on the determined encoding technique.

[0023] Additional means of generating video previews can be found in U.S. patent application Ser. No. ______, entitled "Video Preview Creation with Link" (Attorney Docket 91283-000710US-896497), U.S. patent application Ser. No. ______, entitled "User Interface for Video Preview Creation" (Attorney Docket 91283-000720US-897301), and U.S. patent application Ser. No. ______, entitled "Video Preview Creation with Audio" (Attorney Docket 91283-000740US-897294), which are incorporated by reference in their entirety.

[0024] At step 120, one or more video previews may be organized into one or more channels or collections. For example, the method 100 can associate the video preview generated in step 110 (e.g., a 4-second animated GIF of a snowboarder jumping off a ledge) with a channel (e.g., a collection of videos about snowboarders). In some embodiments, the video previews may be organized in a group (e.g., a composite, a playable group, a cluster of video previews) and displayed on a network page. Additional information about the organization and layout of video previews can be found in U.S. patent application Ser. No. ______, entitled "Generation of Layout of Videos" (Attorney Docket 91283-000750US-897295), which is incorporated by reference in its entirety.

[0025] At step 130, a GUI may be provided with the video previews. For example, the GUI may provide one or more channels (e.g., channel relating to snowboarders, channel relating to counter cultures), one or more videos within a channel (e.g., a first snowboarding video, a second snowboarding video, and a first counter culture video), or a network page displaying one or more video previews. The video previews may be shared through social networking pages, text messaging, or other means. Additional information about viewing and sharing video previews can be found in U.S. patent application Ser. No. ______, entitled "Activating a Video Based on Location in Screen" (Attorney Docket 91283-000760US-897296), which is incorporated by reference in its entirety.

II. System for Providing Video Previews

[0026] Various systems and computing devices can be involved with various workflows used to create a video preview based on the environment that will display the video preview.

[0027] FIG. 2 shows block diagrams of various computing devices used to generate or provide a video preview. For example, the computing devices can include a video server 210, a provider server 220, a user device 230, or a third party server 240 according to an embodiment of the present invention. In some embodiments, any or all of these servers, subsystems, or devices may be considered a computing device.

[0028] The computing devices can be implemented in various ways without departing from the essence of the invention. For example, the video server 210 can provide, transmit, and store full videos and/or video previews (e.g., Ooyala.RTM., Brightcove.RTM., Vimeo.RTM., YouTube.RTM., CNN.RTM., NFL.RTM., Hulu.RTM., Vevo.RTM.). The provider server 220 can interact with the video server 210 to provide the video previews. In some embodiments, the provider server 220 can receive information to generate the video preview (e.g., an identifier specifying a device or application for displaying a video preview, a timestamp of a location in a full video, a request specifying a portion of a full video, a link to the full video, the full video file, a push notification including the link to the full video). The user device 230 can receive a video preview and/or full video to view, browse, or store the generated video previews. The third party server 240 can also receive a video preview and/or full video to view or browse the generated video previews. In some embodiments, the user device 230 or third party server 240 can also be used to generate the video preview or create a frame object. Additional information about the video server 210, provider server 220, user device 230, and third party server 240 can be found in U.S. patent application Ser. No. ______, entitled "Video Preview Creation with Link" (Attorney Docket 91283-000710US-896497) and U.S. patent application Ser. No. ______, entitled "User Interface for Video Preview Creation" (Attorney Docket 91283-000720US-897301), which are incorporated by reference in their entirety.

[0029] The video server 210 or third party server 240 may also be a content provider for a full video, including one or more images contained in the full video, information about the full video (e.g., title, television channel information, television programming information for a user's location). In some embodiments, the third party server 240 can interact with the user device 230 to provide the additional information to the user device 230 or provider server 220 (e.g., related to television programming in the Bay Area of California, related to U.S. versus foreign television programming). The third party server 240 can identify a particular show (e.g., full video) that the user is likely watching based on the location of the user and channel that the user device 230 is receiving.

[0030] In some embodiments, the video server 210, provider server 220, user device 230, and third party server 240 can be used to receive portions of a full video in a plurality of video streams (e.g., parallel I/O) at the computing device (e.g., provider server 220). With multiple portions of the full video received (e.g., at the provider server 220), the computing device can create multiple video previews simultaneously (e.g., using multiple encoding techniques).

[0031] In some embodiments, the identification of the user device 230, application, or network that is used to display the video preview can affect the creation of the video preview. For example, a computing device (e.g., provider server 220) can receive an identifier specifying a device (e.g., an Android.RTM. device) for displaying the video preview. The device may be the user device 230 or recipient device of the video preview from the user device (e.g., Apple iPhone.RTM. sending the video preview to an Android.RTM. device). The provider server 220 can create (e.g., encode, compress, transcode) the video preview based on a determined encoding technique (e.g., video codecs including H.264 AVC, MPEG-4 SP, or VP8).

III. Creation of a Video Preview

[0032] A video preview may be generated by a provider server 220, user device 230, or video server 210. In some embodiments, a third party server 240 may generate a video preview using a similar process as a user device 230.

[0033] A. Identifying a Video Preview from a Full Video

[0034] FIG. 3 shows a flowchart illustrating a method of identifying a video preview from a full video according to an embodiment of the present invention. For example, a video may begin as a series of frames or images (e.g., raw format) that are encoded by the video server 210 into a full video. Encoding the full video may reduce the size of the corresponding file and enable more efficient transmission of the full video to other devices (e.g., provider server 220, user device 230). In some embodiments, the provider server 220 can transcode the full video (e.g., change the encoding of the full video to a different encoding, or re-encode the full video to the same encoding) in order to generate and transmit the video preview. For example, transcoding may change the start time of a video, its duration, or caption information.
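In practice, a transcoding step like this is often performed with a tool such as `ffmpeg`. The sketch below only assembles an `ffmpeg` command line for cutting a portion of a full video into an animated GIF; the file names, frame rate, and width are illustrative assumptions, not values from the application:

```python
def build_transcode_cmd(src, dst, start, duration, fps=10, width=320):
    """Assemble (but do not run) an ffmpeg command that cuts a portion
    of the full video and transcodes it, e.g. MP4 -> animated GIF."""
    return [
        "ffmpeg",
        "-ss", str(start),        # seek to the start of the selected portion
        "-t", str(duration),      # keep only the preview's duration
        "-i", src,                # input: the full video
        "-vf", f"fps={fps},scale={width}:-1",  # resample and shrink frames
        dst,                      # output format inferred from the extension
    ]

cmd = build_transcode_cmd("full_video.mp4", "preview.gif", 12.5, 4)
# The command could then be run with subprocess.run(cmd, check=True).
```

Placing `-ss` before `-i` makes the seek fast, which matters when many previews are cut from one long full video.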

[0035] The video server 210 may store and provide a full video. The full video can be received from a user or generated by the computing device and offered to users through a network page. In some embodiments, another computing device (e.g., a user device 230, a third party server 240) can upload the full video to the video server 210.

[0036] At block 310, a request to generate a video preview of a full video can be received. For example, the request to generate a video preview of a full video can specify a portion of the full video (e.g., the first 10 seconds, the last 15 seconds, the portion of the full video identified by a timestamp). For example, the user device 230 may identify a video portion of the full video by identifying a start/end time, a timestamp in the full video, or other identification provided by the GUI. The information (e.g., start/end time, timestamp) can be transmitted to the provider server 220. In some embodiments, a user device 230 can periodically request to generate a video preview (e.g., every 30 seconds, based on a recurring request).
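A request of this kind might carry little more than a video reference and a start/end pair. The helper below is a hypothetical sketch of how a provider server could validate the specified portion against the full video's duration; the 10-second cap is an assumption borrowed from the description's "best 1-10 seconds" example:

```python
def validate_portion(start, end, full_duration, max_preview=10.0):
    """Check that a requested portion lies inside the full video and
    does not exceed a maximum preview length (assumed 10 s here)."""
    if not (0 <= start < end <= full_duration):
        raise ValueError("portion must lie within the full video")
    if end - start > max_preview:
        raise ValueError("requested portion is too long for a preview")
    return {"start": start, "end": end, "duration": end - start}

print(validate_portion(5.0, 9.0, full_duration=120.0))
# {'start': 5.0, 'end': 9.0, 'duration': 4.0}
```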

[0037] The request can include an identification of the video portion along with other information, including a start/end time, a link to the full video at the video server 210, a timestamp, the user's internet protocol (IP) address, a user-agent string of the browser, cookies, the user's user identifier (ID), and other information. A user-agent string, for example, may include information about a user device 230 in order for a web server to choose or limit content based on the known capabilities of a particular version of the user device 230 (e.g., client software). The provider server 220 can receive this and other information from the user device 230.
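User-agent strings vary widely, so the substring matching below is only a rough, hypothetical sketch of deriving a coarse device identifier from the user-agent header; real user-agent parsing is considerably more involved:

```python
def device_from_user_agent(ua):
    """Map a user-agent string to a coarse device identifier.
    Illustrative only; production parsers handle far more cases."""
    ua = ua.lower()
    if "iphone" in ua or "ipad" in ua:
        return "ios"
    if "android" in ua:
        return "android"
    return "desktop"

print(device_from_user_agent(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)"))  # ios
```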

[0038] At block 320, at least a portion of the full video can be received. For example, at least the portion of the full video can be received at the provider server 220 or other computing device. The provider server 220 may request a full video based in part on the information received from the user device 230. For example, the provider server 220 can transmit a request (e.g., email, file, message) to the video server 210 that references the full video (e.g., link, identifier). In some examples, the video server 210 and provider server 220 may be connected through a direct and/or secure connection in order to retrieve the video (e.g., MP4 file, stream, full video portion). The video server 210 may transmit the full video (e.g., file, link) to the provider server 220 in response to the request or link to the full video.

[0039] At block 330, an identifier specifying a device or application can be received. For example, the request from the user device 230 can include an identifier, which is in turn used to request a full video from the video server 210. The identifier can specify the device or application that may be used for displaying the video preview. The identifier can include a user name; a device identifier (e.g., electronic serial number (ESN), international mobile equipment identity (IMEI), mobile equipment identifier (MEID), phone number, subscriber identifier (IMSI), device carrier); or an identification of an application for displaying the video preview (e.g., network page, browser application, operating system).

[0040] The identifier can be received through a variety of methods. For example, the identifier can be received through an application programming interface (API), from a television programming provider, from an information feed specifying the device, application, or network (e.g., from a third party server 240 or user device 230), from metadata, from a user (e.g., passive/active input), or other sources.

[0041] In some embodiments, the request sent to the video server 210 may vary depending on the type of video needed for the full video or requested video preview. For example, the full video may be a raw MP4 format (e.g., compressed using advanced audio coding (AAC) encoding, Apple Lossless format). The provider server 220 can determine that the desired format for the user device 230 is a different type of file format (e.g., an animated GIF) and request additional information from the video server 210 in order to transcode the MP4 format to an animated GIF format for the user device 230 (e.g., including the device type, application that will play the video preview, etc.).

[0042] At block 340, an encoding technique can be determined. For example, an encoding technique can be determined based on the identifier specifying the device or application for displaying the video preview. The encoding technique can be used to generate the video preview (e.g., using a portion of the full video). A variety of encoding techniques can be used, including a graphics interchange format (GIF), an animated GIF, an MP4 container, an H.264 video codec, an advanced audio coding (AAC) audio codec, a WebM container, a VP8 video codec, an Ogg Vorbis audio codec, or MPEG-4 SP.
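The determination at block 340 can be pictured as a lookup from the received identifier to an encoding recipe. The codec and container names below come from the application's own list, but which identifier maps to which recipe is an illustrative assumption:

```python
# Hypothetical identifier -> encoding recipe table. The codecs (H.264,
# AAC, VP8, Vorbis, animated GIF) are named in the application; the
# specific assignments here are assumptions for illustration.
ENCODINGS = {
    "ios":     {"container": "mp4",  "video": "h264", "audio": "aac"},
    "android": {"container": "webm", "video": "vp8",  "audio": "vorbis"},
    "email":   {"container": "gif",  "video": "gif",  "audio": None},
}

def choose_encoding(identifier, default="ios"):
    """Pick an encoding technique based on the device/application identifier."""
    return ENCODINGS.get(identifier, ENCODINGS[default])

print(choose_encoding("android")["video"])  # vp8
```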

[0043] In some embodiments, the encoding technique is dependent on the identification of a type of user device that submitted the request to generate the video preview. In some embodiments, the encoding technique is dependent on the type of application that will display the video preview at the user device.

[0044] Multiple encoding techniques can be used. For example, one encoding technique can be used to generate a first video preview and a second encoding technique can be used to generate a second video preview. The second video preview can be generated to allow the user device to share the second video preview through a particular medium. The first video preview and second video preview can be provided to the user device 230.

[0045] In some embodiments, a palette-based size optimization can be included as an encoding technique. For example, the encoding technique can include a palette-based size optimization by generating a common color palette for the video preview and limiting the video preview to the common color palette (e.g., limiting the images in the video preview to particular colors identified by the common color palette).
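One simple way to realize a common color palette is to count colors across the preview's frames and keep the most frequent ones, then limit every frame to that shared palette. The sketch below works on frames given as lists of RGB tuples; it is an illustrative assumption about how such an optimization could work, not the patented algorithm itself:

```python
from collections import Counter

def common_palette(frames, palette_size=256):
    """Build one shared palette from the most frequent colors
    across all frames of the preview."""
    counts = Counter(color for frame in frames for color in frame)
    return [color for color, _ in counts.most_common(palette_size)]

def nearest(color, palette):
    """Snap a color to its closest palette entry (squared RGB distance)."""
    return min(palette,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

def quantize(frame, palette):
    """Limit a frame's pixels to the common color palette."""
    return [nearest(c, palette) for c in frame]

frames = [[(255, 0, 0), (250, 5, 5)], [(255, 0, 0), (0, 0, 255)]]
pal = common_palette(frames, palette_size=2)
# (255, 0, 0) appears in both frames, so it survives into the palette.
```

Sharing one palette across frames is what makes formats such as GIF (which supports a global color table) compress well for previews with stable color content.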

[0046] At block 350, a video preview can be created. For example, the provider server can create the video preview from the full video based on the determined encoding technique. In some embodiments, the video preview can be created using a plurality of video streams (e.g., parallel I/O). With multiple portions in the full video received (e.g., at the provider server 220), the computing device can create multiple video previews (e.g., simultaneously, to expedite video preview creation, etc.).
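The parallel-stream idea at block 350 can be sketched with a thread pool that fetches several portions of the full video concurrently and encodes a preview from each. The fetch and encode functions here are simulated stand-ins, since the real stream endpoints and encoders are outside this sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_segment(segment_id):
    """Stand-in for downloading one video stream; returns fake frame data."""
    return f"frames-of-segment-{segment_id}"

def encode_preview(frames, technique):
    """Stand-in for encoding one video preview from received frames."""
    return f"{technique}:{frames}"

def previews_from_streams(segment_ids, technique="gif"):
    """Download segments in parallel, then encode a preview from each."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        segments = list(pool.map(fetch_segment, segment_ids))
    return [encode_preview(frames, technique) for frames in segments]

print(previews_from_streams([0, 1, 2]))
# ['gif:frames-of-segment-0', 'gif:frames-of-segment-1', 'gif:frames-of-segment-2']
```

`ThreadPoolExecutor.map` preserves input order, so the previews come back aligned with the requested segments even though the downloads overlap in time.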

[0047] At block 360, the video preview may be provided. For example, the video preview can be provided to the user device 230. The video preview may be provided using various methods. For example, the video preview can be transmitted via a messaging service to the user device 230, in an attachment to an email, embedded in a short messaging service (SMS) or text message, provided through a GUI accessible by the user device, or other methods. In some embodiments, the video preview (e.g., file, animated GIF, link to a video preview) may be stored in a temporary location (e.g., clipboard, temporary data buffer) at a user device 230 after the video preview is generated. The user may copy/paste the video preview to an email client, SMS, or other application in order to use or share the video with other applications and/or devices.

[0048] The video preview can be provided to the user device 230 in a variety of formats. For example, the video preview can be provided with a link to a stored file on a webserver and/or the provider server 220, an animated GIF file, an MP4 file, or another acceptable file format. In some examples, the video preview can be provided in a format based in part on a particular type of user device 230 (e.g., Apple iPhones can receive an MPEG-4 formatted file, Android devices can receive an AVI formatted file). As illustrated, the user device 230 may provide information (e.g., an identifier specifying a device, application, device type, or operating system) to the provider server 220 prior to receiving the properly formatted video preview, and the provided video preview can correspond with that information.

[0049] In some embodiments, multiple video previews can be provided. For example, the computing device (e.g., provider server 220) can send multiple video previews to a device (e.g., the clipboard, temporary data buffer). The user may choose to paste the video preview into a particular application (e.g., messaging service, email client) and the properly encoded video preview for that application can be provided to the application. For example, the user may access a GUI provided by the provider server 220 that includes one or more request tools (e.g., buttons, text boxes) to access particularly encoded video previews. When the user selects one of the request tools, the corresponding video preview can be provided to the user. In some examples, the user may provide (e.g., copy/paste) a link to the video preview and the link can direct the user to a properly encoded video preview.

[0050] In some embodiments, the video preview can be provided to the user device 230 identified in a request from the user device. For example, the user device can specify the ultimate device or application that the user device 230 intends to use to display the video preview (e.g., through the use of multiple request tools or buttons in a GUI provided by the provider server 220, through user input). The user can select a request tool in the GUI (e.g., "I want a video preview for an SMS message") and the received video preview can be encoded for the identified use. In another example, the user can select a request tool in the GUI that identifies a social networking platform (e.g., Facebook.RTM., Twitter.RTM., Google+.RTM., Tumblr.RTM.), so that the received video preview can be uploaded directly to the social networking website.

[0051] B. Correlating an Identifier with an Encoding Technique

[0052] In some embodiments, the user device 230 will transmit an identifier specifying a device or an application for displaying the video preview. The identifier can be matched with a list of identifiers at the provider server 220 (e.g., in a database) to find a matching identifier. If the received identifier is identified or found at the provider server 220, the provider server 220 can determine an encoding technique for the user device based on the identifier. Similarly, an application (e.g., or a corresponding device configured to execute the application) can provide an identifier to the provider server 220 that specifies the application for displaying the video preview. If the identifier is found, the provider server can determine an encoding technique for the application based on the identifier.

[0053] Once the encoding technique is determined based on the identifier, other encoding techniques can be determined as well. For example, five encoding techniques can be available and each may correspond with one or more identifiers. The encoding technique associated with the received identifier can be selected and used to start creating the video preview. In some embodiments, one or more of the encoding techniques that do not correspond with the received identifier can also be used to create one or more video previews, including a video preview created from a full video.

[0054] When the identifier is not found or the device/application does not provide an identifier, an encoding technique can still be determined. In some embodiments, a default encoding technique can be selected (e.g., an animated GIF) and provided to the user device 230 or application.
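
The identifier matching and default fallback described in paragraphs [0052]-[0054] can be sketched as a simple table lookup. The identifier names and encoding settings below are illustrative assumptions for the sketch, not values from the disclosure.

```python
# Hypothetical mapping from device/application identifiers to encoding
# techniques; the keys and settings are assumed for illustration.
ENCODING_TECHNIQUES = {
    "ios":     {"container": "mp4",  "video_codec": "h264", "audio_codec": "aac"},
    "android": {"container": "webm", "video_codec": "vp8",  "audio_codec": "vorbis"},
    "browser": {"container": "webm", "video_codec": "vp8",  "audio_codec": "vorbis"},
}

# Default technique when no identifier is provided or it is not found
# (the disclosure names an animated GIF as one possible default).
DEFAULT_TECHNIQUE = {"container": "gif", "video_codec": "animated-gif", "audio_codec": None}

def determine_encoding_technique(identifier):
    """Match a received identifier against known techniques, else fall back."""
    if identifier is None:
        return DEFAULT_TECHNIQUE
    return ENCODING_TECHNIQUES.get(identifier.lower(), DEFAULT_TECHNIQUE)
```

For example, `determine_encoding_technique("iOS")` would select the MP4/H.264 entry, while an unknown or missing identifier falls through to the default animated GIF.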

[0055] In some embodiments, multiple encoding techniques can be used. For example, one or more encoding techniques can be used to create a first video preview, one or more encoding techniques can be used to create a second video preview, and so on. The plurality of video previews (including the first and second video previews) can be sent to the same device or application. The device may store the plurality of video previews in a temporary storage (e.g., clipboard or cache). When the user would like to display the video preview, the appropriate video preview can be selected and used for the appropriate device/application (e.g., Firefox.RTM. application receives a video preview using a WebM video container). In some embodiments, a video preview encoded with a preferred or default encoding technique can be sent to the device or application first, followed by other video previews created using different encoding techniques and/or stored at a provider server 220.

[0056] C. Determining an Encoding Technique Based on the Type of User Device

[0057] FIG. 4 shows illustrations of a video preview displayed with various devices according to an embodiment of the present invention. For example, the provided video preview can be displayed on a variety of different user devices 230 in a variety of different formats, including handheld devices 410, laptops 420, televisions 430, game consoles 440, and the like. The video preview 450 can be displayed as a series of images from a full video. In some embodiments, the video preview 450 can include a caption 460, link 470 (e.g., to the full video, to the video preview, to additional information associated with the video preview, to a stored video preview on a video server 210), or other information. In some examples, the video preview may be displayed in a frame object.

[0058] For example, the identifier may specify the device as an Apple.RTM. device (e.g., a handheld device 410 or laptop 420) or a device running an iOS operating system (e.g., iPhone.RTM., iPad.RTM.). In some embodiments, the provider server 220 can generate a video preview and transmit the video preview to an encoding service (e.g., Cloud Video Encoding, Cloud Video Transcoding) or third party server 240. The provider server 220 can receive the properly encoded video preview and provide the video preview to a user device 230 (e.g., so that the video preview plays when activated). In other embodiments, the provider server 220 can generate a video preview (e.g., locally) by using a particular encoding technique. For example, the encoding technique can include an H.264 video codec (e.g., used up to 1080p), 30 frames per second (FPS), High Profile level 4.1 with an advanced audio coding low complexity (AAC-LC) audio codec up to 160 Kbps, 48 kHz, stereo audio with a .m4v, .mp4, or .mov video container. In another example, the encoding technique can include an MPEG-4 video codec up to 2.5 Mbps, 640 by 480 pixels, 30 FPS, Simple Profile with an AAC-LC audio codec up to 160 Kbps per channel, 48 kHz, stereo audio with a .m4v, .mp4, or .mov video container. In yet another example, the encoding technique can include Motion JPEG (M-JPEG) up to 35 Mbps, 1280 by 720 pixels, 30 FPS, with u-law or PCM stereo audio and .avi as the video container.

[0059] In another example, the identifier may specify the device as a device that runs an Android.RTM. operating system (e.g., operating on a Samsung.RTM. handheld device 410). The provider server 220 can generate a video preview using a particular encoding technique for this particular device. For example, the encoding technique can include an H.264 video codec with a 3GPP, MPEG-4, or MPEG-TS video container. In another example, the encoding technique can include a VP8 video codec with a WebM (.webm) or Matroska (.mkv) video container. The encoding technique may also include audio codecs, including AAC-LC, HE-AACv1, HE-AACv2, AAC-ELD, AMR-NB, AMR-WB, FLAC, MP3, MIDI, Vorbis, or PCM/WAVE. The encoding technique may also include various specifications for video resolution (e.g., 480 by 360 pixels, 320 by 180 pixels), frame rate (e.g., 12 FPS, 30 FPS), video bit rate (e.g., 56 Kbps, 500 Kbps, 2 Mbps), audio channels (e.g., 1 mono or 2 stereo), audio bit rate (e.g., 24 Kbps, 128 Kbps, 192 Kbps), or other specifications.

[0060] In another example, the identifier may specify the device as a television 430. In some embodiments, the provider server 220 can receive a full video from a third party server 240 (e.g., broadcast center, set top box data provider). When an analog television is used (e.g., identified by the identifier that specifies the device), the encoding technique can include national television system committee (NTSC), phase alternating line (PAL), or sequential color with memory (SECAM) analog encoding. The video preview can be provided using radio frequency (RF) modulation to modulate the signal onto a very high frequency (VHF) or ultra-high frequency (UHF) carrier. When a television that runs an Android.RTM. operating system (e.g., Google TV set top boxes) or an iOS operating system (e.g., Apple TV) is used, the encoding techniques can be similar to the techniques described above (e.g., .mp4 container, H.264 video codec, AAC audio codec, etc.). When a satellite television is used, the encoding technique can include an MPEG video codec to generate the video preview, followed by an MPEG-4 video codec adjusting the size and format of the video preview for the satellite television receiver (e.g., television 430). In some embodiments, the video preview can be encrypted at the provider server 220 and decrypted at the television 430.

[0061] In another example, the identifier may specify the device as a game console 440. In some embodiments, the provider server 220 can receive an identifier specifying the device is a game console. The provider server can also receive a full video from the game console 440 (e.g., a stream of images showing the user interacting as a digital character in a played game to use as the full video). The encoding technique can include PAL, NTSC, animated GIF, MP4 container, a H.264 video codec, an AAC audio codec, WebM container, VP8 video codec, an Ogg Vorbis audio codec, or other encoding techniques supported by the game console. In some embodiments, the game console 440 can provide a video/audio capture application programming interface (API). The provider server 220 can capture the images provided on the game console 440 via the API (e.g., the game play could be the "full video") and create the video preview at the provider server using the images.

[0062] It should be appreciated that the provided encoding techniques are illustrations. Other encoding techniques are available without departing from the essence of the invention.

[0063] D. Encoding Captions

[0064] The determined encoding technique can also be used to encode captions 460. For example, the video preview may be created from the full video based on the determined encoding technique. The caption may also use the determined encoding technique. For example, the caption may include a dual-layer file (e.g., soft captioning), where each layer is encoded using the encoding technique, so that the caption may be adjusted independently from the video preview (e.g., change language of the text in the caption). The video preview and caption can overlap (e.g., where the caption can be displayed on top of the video preview layer without altering the video preview itself). In another example, the video preview and caption can be transcoded in order to incorporate the caption with the video preview in a single-layered video preview (e.g., caption "burned in" to the video). Additional information about incorporating captions can be found in U.S. patent application Ser. No. ______, entitled "Video Preview Creation with Link" (Attorney Docket 91283-000710US-896497), which is incorporated by reference in its entirety.

[0065] E. Determining an Encoding Technique Based on the Application that Will Display the Video Preview

[0066] FIG. 5 shows illustrations of a video preview displayed in various applications according to an embodiment of the present invention. For example, the environment 500 can comprise a plurality of computing devices, including a provider server 220, one or more user devices 230 (e.g., 530, 540), and an application 550. The provider server 220 can create and provide a plurality of video previews 520, 522, 524 to the other devices and applications in the environment.

[0067] In some embodiments, the identifier may specify that the application is a network browser (e.g., Firefox, Internet Explorer, Chrome). The provider server 220 can create a video preview using a particular encoding technique for the network browser. For example, the encoding technique can include a WebM (.webm) video container, based on which encoding techniques the application supports. In another example, the provider server 220 can create multiple video previews using multiple encoding techniques (e.g., including an .mp4 video container). The video preview created with the first encoding technique can be provided to the user, and the other encoding techniques can be used to create video previews for other applications that may also display the video preview. Network browsers (e.g., other than Firefox) may use various encoding techniques, including MP4, animated GIF, Ogg Video files (e.g., file extension .ogv, mime type video/ogg), the Theora video codec, and the Ogg Vorbis audio codec.

[0068] Other applications may be identified as well, for example, any software application (e.g., "app") or client (e.g., email client) that can be configured to run on a mobile device, smartphone, gaming console, or television. Similar encoding techniques may be implemented with these applications, including GIFs or encoding techniques where video previews will not automatically play. In some examples, audio may be omitted based on constraints of the device or application as well.

[0069] F. Determining an Encoding Technique Based on a Network

[0070] In some embodiments, the encoding technique can be determined based on the network. For example, a provider server 220 or user device 230 can identify that a network is relatively slow. The encoding technique can be chosen to generate a smaller video preview (e.g., a tiny GIF) instead of a larger file, so that the user device 230 can receive the video preview significantly more quickly. In another example, the identifier may specify that the video preview will be displayed in a messaging service (e.g., SMS, multimedia messaging service (MMS), text message). The provider server 220 can determine an encoding technique that creates a smaller video preview, because the video preview will likely be viewed using a slower network connection.
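
The network-based selection above can be reduced to a simple rule. This is a hedged sketch; the bandwidth threshold and the technique labels are assumptions for illustration, not values from the disclosure.

```python
# Assumed cutoff separating "slow" from "fast" links; not from the source.
SLOW_LINK_KBPS = 500

def choose_by_network(bandwidth_kbps):
    """Pick a smaller encoding for slow networks, a richer one otherwise."""
    if bandwidth_kbps < SLOW_LINK_KBPS:
        return "animated-gif-small"   # e.g., a tiny GIF for SMS/MMS delivery
    return "mp4-h264"                 # e.g., a full MP4 preview on fast links
```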

IV. Compression Using a Common Color Palette

[0071] A video preview or video file may be encoded or compressed using one or more palette-based optimization techniques. For example, the video file can be compressed using a common color palette.

[0072] A. Use of Common Color Palette

[0073] FIG. 6 shows a flowchart illustrating a method of generating a video preview using a palette-based optimization technique according to an embodiment of the present invention.

[0074] At block 610, a request to generate a compressed video file can be received. For example, the request can specify at least a portion of a full video to be used in creating the compressed video file. The specified portion of the full video can comprise a plurality of images. In some embodiments, the computing device can receive information associated with the specified portion of the full video (e.g., a timestamp of a location in a full video, a request specifying a portion of a full video, a link to the full video, the full video file, a push notification including the link to the full video).

[0075] The request can include an identification of the full video or a litany of other information, including a start/end time, link to a full video at the video server 210, timestamp, the user's internet protocol (IP) address, a user-agent string of the browser, cookies, a user's user identifier (ID), and other information. A user-agent string, for example, may include information about a user device 230 in order for the webserver to choose or limit content based on the known capabilities of a particular version of the user device 230 (e.g., client software). The provider server 220 can receive this and other information from the user device 230.

[0076] At block 620, a palette-based optimization technique can be determined. The palette-based optimization technique can be used to generate the compressed video file. For example, the palette-based optimization technique can limit the number of colors used to create the compressed video file. In another example, a single color palette can be used for encoding the compressed video file, instead of one color palette for each of the images in the video file.

[0077] At block 630, the plurality of images can be analyzed using the palette-based optimization technique. For example, the analysis can determine at least one common color palette. The plurality of images and common color palette can be used to generate multiple compressed images of the compressed video file. In some examples, a representative image can be chosen (e.g., for a portion of the full video, for a scene, etc.).

[0078] The common color palette can be a single color palette (e.g., a combination of palettes from multiple images) or multiple color palettes (e.g., where one color palette is used for one portion of the images and another color palette is used for another portion of the images). For example, one or more images can be analyzed and the union of the colors in the analyzed images can be used to make a single common color palette. In another example, an image can be chosen as a representative image for each scene. In yet another example, one or more images can be analyzed to identify multiple scenes in the images, e.g., where different scenes involve different objects and/or backgrounds. A common color palette can be generated for each scene. In another example, a single common color palette can be generated from the union of the colors across the scenes. The colors may be aggregated and/or the union of the colors may be used to generate the common color palette.
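
One way to realize the union-of-colors approach above is to pool the colors from several analyzed frames, keep the most frequent ones as the common palette, and then limit every frame to that palette. This is a minimal sketch under those assumptions; the disclosure does not prescribe this particular selection rule.

```python
# Illustrative common-color-palette construction from multiple frames.
from collections import Counter

def common_palette(frames, size=4):
    """frames: lists of (r, g, b) pixels. Keep the `size` most common colors
    from the union of colors across all analyzed frames."""
    counts = Counter(color for frame in frames for color in frame)
    return [color for color, _ in counts.most_common(size)]

def nearest(color, palette):
    """Nearest palette entry by squared RGB distance."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

def quantize(frame, palette):
    """Limit one frame's pixels to colors identified by the common palette."""
    return [nearest(color, palette) for color in frame]
```

Every compressed image produced with `quantize` then shares the single `common_palette`, which is the size reduction the paragraph describes.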

[0079] At block 640, multiple compressed images can be specified. The multiple compressed images can be generated using the one or more common color palettes. For example, the images from the mountain scene or the person's face can be identified and encoded using one or more common color palettes. In the mountain scene, the multiple compressed images can include a first image from the top of the mountain, a second image from 10-feet down the mountain, and a third image from 20-feet down the mountain. The resultant images may be compressed images that are limited to the defined colors in the palette (e.g., the same color palette can be used for each of the three compressed images, including a color palette that uses four colors out of 256 possible colors).

[0080] A scene may be an image in the full video or video preview that includes a similar background or combination of pixels as one or more other frames in the full video or video preview. The scene can include a different rendered view of the image in the full video or video preview. For example, a full video may include a first scene showing a President and a second scene showing people walking to meet the President. The background or combination of pixels for the first scene may be distinguishable from the second scene.

[0081] In another example, the multiple compressed images can include different scenes. The scenes may be analyzed based on the compressed images that are used to create the scene. For example, the full video can include six compressed images. The first three compressed images can include the President speaking to a group of people and the second three compressed images can include the group of people listening to the President. The full video can pan between the President and the group of people, or simply capture a plurality of images from the President, pause the camera or edit the frames to remove the panning, and capture a plurality of images from the group listening to the President. There may be two common color palettes, including one common color palette for the President (e.g., navy blues, deep reds) and one common color palette for the group of people (e.g., pastel colors).

[0082] In some examples, the specified portion of the full video can be analyzed to determine information about a plurality of scenes in the full video or video preview. For example, before creating the compressed video file, the specified portion of the full video can be analyzed. The analysis can help determine the plurality of scenes in the specified portion of the full video and used to determine a common color palette. The common color palette can be an aggregated combination of the scenes, or multiple common color palettes can be determined for each of the plurality of scenes (e.g., if there are two scenes, then two common color palettes can be determined).

[0083] At block 650, the compressed video file can be created. For example, the compressed video file can be created from the plurality of images of the specified portion of the full video. As illustrated, the compressed video preview can include the two scenes and the common color palette can be generated from each scene in the video preview. In another example, a single common color palette may be generated based on a combination of the plurality of scenes. The multiple compressed images can be rendered using the common color palette when the compressed video file is viewed.

[0084] B. Optimization Techniques

[0085] A variety of optimization techniques are possible, including palette-based optimization. For example, when generating a compressed video file in an animated GIF format, the optimization technique can include generating a common color palette. A single common color palette can be generated for the entire compressed video file (e.g., one palette shared by each of the images or frames identified in the full video). In some examples, a plurality of frames can be analyzed and used to generate a single image. The common color palette can be generated from the single combined image.

[0086] In some examples, a scene analysis can be one type of optimization technique that is used, without generating a common color palette. For example, when a person is speaking into a camera in the full video, the mouth of the person may change throughout the full video, but the rest of the person's face and background around the person may remain constant. The optimization technique can use the same image information for the minimal changing portions of the image instead of storing new image information that is substantially the same as the rest of the image information (e.g., using a cinemagraph generator). In some embodiments, the scene analysis may consider which portions of the image are static or dynamic through user input using a graphical user interface. For example, with a "brush"-like tool, the user can click and drag over the areas that are to remain dynamic.
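
The static/dynamic analysis above can be illustrated with simple frame differencing: pixels that barely change between frames reuse the previous frame's data, so fresh image information is stored only for the changing region (e.g., the speaker's mouth). This is a sketch with grayscale pixels and an assumed change threshold.

```python
def reuse_static_pixels(prev_frame, next_frame, threshold=8):
    """Return (merged_frame, changed_count), reusing near-identical pixels.

    Frames are flat lists of grayscale intensities; `threshold` is an
    assumed tuning knob for how much change still counts as static.
    """
    out = []
    changed = 0
    for old, new in zip(prev_frame, next_frame):
        if abs(old - new) <= threshold:
            out.append(old)          # static region: keep existing image data
        else:
            out.append(new)          # dynamic region: store new image data
            changed += 1
    return out, changed
```

A user-driven variant, as in the "brush" tool described above, would simply force the brushed pixels into the dynamic branch regardless of the measured difference.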

[0087] In an embodiment, the full video may be encoded to an animated GIF using indexed color. For example, with indexed color, the color information for the animated GIF may not be directly stored with the image pixel data. Instead, the color information can be stored in an array of color elements, called a palette, that defines the particular colors.

[0088] In a standard animated GIF, as many as one palette for every frame can be specified. In some embodiments, the palette-based optimization technique can limit the number of palettes used for the frames. For example, a common color palette can be generated for a plurality of images in the full video (e.g., the portion of the full video that displays a mountain scene with similar colors, the portion of the full video that displays a person's face in the center of the frame as they walk through a city). In another example, a common color palette can be generated for a plurality of images using default color specifications (e.g., red-green-blue, black/white, a limited range of red- and blue-tones, etc.). In another example, multiple common color palettes can be generated for one compressed video file, such that one or more common color palettes are used for one portion of the full video, one or more common color palettes are used for a second portion of the full video, and so on.

[0089] When a single common color palette is used, the colors can be selected using various methods. For example, a plurality of images (e.g., four frames) can be selected that have the largest file sizes. The largest frames may, in some embodiments, contain the most colors, so that the common color palette identifies several colors (e.g., a Mardi Gras scene versus a snow storm scene). In another example, a plurality of images can be selected periodically (e.g., at minutes 1, 2, and 3 of a 4-minute full video, in the portion of the full video, in the compressed video file, or in the entire full video). The images that are selected periodically can provide a broad representation of the colors in the video (e.g., assuming that the scene will change as the video progresses).
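
Both frame-selection strategies in the paragraph above can be sketched directly; the frame sizes and sampling interval below are assumptions for illustration.

```python
def largest_frames(frames, sizes, count=4):
    """Pick the `count` frames with the largest encoded file size,
    which tend to contain the most colors."""
    ranked = sorted(range(len(frames)), key=lambda i: sizes[i], reverse=True)
    return [frames[i] for i in ranked[:count]]

def periodic_frames(frames, every=30):
    """Pick every `every`-th frame for a broad sample of the video's colors."""
    return frames[::every]
```

The selected frames would then feed a palette builder such as the union-of-colors analysis described earlier in this section.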

[0090] In another embodiment, multiple common color palettes (e.g., palette clusters) can be identified. For example, in a full video where the camera cuts back and forth between two people having a conversation, the colors associated with each person may vary. A common color palette can be generated for each scene. The common color palettes for each scene can beneficially reduce the file size from the original full video and provide a better-quality compressed video file than a single common color palette spanning multiple scenes.

[0091] FIG. 7 shows an illustration of a common color palette according to an embodiment of the present invention. For example, the illustration shows a 2-bit indexed image 710 where each pixel 720 is represented by a number/index, and an image 730, where each number/index corresponds with a color 740. Each pixel may correspond with some value in the color palette (e.g., 0 and 1 in the illustration correspond with black and white, respectively). In some optimization techniques, the image can be encoded in a similar method as shown in FIG. 7. The color information may not be directly associated with the image pixel data (e.g., image pixel [0,0] is Red-100), but can be stored in a separate piece of data called a color palette. The color palette may be an array of color elements, in which each element (e.g., a color) is indexed by its position within the array. The image pixels may not contain the full specification of their colors, but can instead contain their indices in the palette. Once the color palette is generated (e.g., the bitmap corresponding with the 2-bit indexed image 710 on the top of FIG. 7), the image 730 can be formed using the color palette (e.g., the checkerboard image on the bottom of FIG. 7). The image 730 can result in a close representation of an original image (e.g., and video preview) that uses less memory or storage.

[0092] In some embodiments, a pixel 720 is associated with a corresponding color 740 in a color palette. For example, pixel [0,0] can be associated with neon green. As discussed, several pixels are used to create an image or frame, and then several images or frames are used to generate the video preview. As illustrated, pixel 720 corresponds with a black color 740 and the pixel next to 720 corresponds with a white color. The common color palette can include only black and white because black and white are the only colors in this image or frame. The other images or frames of the video preview (e.g., 100 other frames or images) can be created using only black and white, so that when all the images or frames that use the common color palette are sequentially ordered to form the video preview, the video preview will comprise the colors in the common color palette. The reduced number of colors that are stored in a color palette (e.g., fewer numbers associated with colors, fewer colors associated with images/frames, a reduced number of colors, etc.) can result in a reduced size in memory or storage for storing the color palette.
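
The indexed-color scheme of FIG. 7 can be modeled in a few lines: pixels store small palette indices rather than full color values, and the image is rendered by looking each index up in the palette array. The two-color checkerboard below mirrors the black-and-white example above.

```python
def render_indexed(indices, palette):
    """Expand a 2-D array of palette indices into full (r, g, b) colors."""
    return [[palette[i] for i in row] for row in indices]

# Index 0 -> black, index 1 -> white, as in the FIG. 7 illustration.
PALETTE = [(0, 0, 0), (255, 255, 255)]
CHECKERBOARD = [[0, 1],
                [1, 0]]
```

Because each pixel stores a 1- or 2-bit index instead of a full 24-bit color, the per-pixel storage shrinks, which is the memory saving the paragraph describes.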

[0093] Depending on the optimization technique used, the common color palette can be generated from one or more images that contain at least a specified file size. The specified file size can be above a certain threshold (e.g., an image that is above 1-kilobyte (1 k)) or include the maximum file size when compared with other images in the full video (e.g., the first image is 1 k, the second image is 1.5 k, the third image is 2 k, so the specified file size is 2 k and the common color palette can be generated using the third image). The specified file size can be retrieved (e.g., the threshold or maximum file size can be retrieved from a provider server 220 or user device 230), dynamically determined (e.g., when the request to generate a compressed video file is received), and/or specified by a user operating a user device 230.

[0094] A compression can be combined with an optimization technique to further optimize the video preview (e.g., to take advantage of a region of pixels with the same color). For example, the left-half of the image may be black (e.g., a black building, a night image in a video preview, etc.). The values associated with the pixels in the image showing the black portion can be compressed by storing one value instead of many. The one value may be the same or similar for each of those pixels on the left-half of the image, so the compression can store the single color. The pixels in the left-half of the image can reference the single stored color. In another example, the image may contain a frame around the image. The color of the frame can be stored as one color and each of the pixels or portions of the image that are used to create the frame can reference the one color.
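
The region-of-same-color compression above is essentially run-length encoding: a run of pixels sharing one color is stored once as a (color, count) pair that the pixels reference, instead of repeating the value. A minimal sketch:

```python
def run_length_encode(pixels):
    """Collapse runs of identical colors into [color, count] pairs."""
    runs = []
    for color in pixels:
        if runs and runs[-1][0] == color:
            runs[-1][1] += 1         # extend the current run
        else:
            runs.append([color, 1])  # start a new run
    return runs

def run_length_decode(runs):
    """Expand [color, count] pairs back into the original pixel row."""
    return [color for color, count in runs for _ in range(count)]
```

For the black left-half example above, an all-black row compresses to a single pair, and a uniform frame border compresses the same way.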

V. Example Subsystems and Components

[0095] Any of the clients or servers may utilize any suitable number of subsystems. Examples of such subsystems or components are shown in FIG. 8. The subsystems shown in FIG. 8 are interconnected via a system bus 875. Additional subsystems such as a printer 874, keyboard 878, fixed disk 879, monitor 876, which is coupled to display adapter 882, and others are shown. Peripherals and input/output (I/O) devices, which couple to I/O controller 871, can be connected to the computer system by any number of means known in the art, such as input/output (I/O) port 877 (e.g., USB, FireWire.RTM.). For example, I/O port 877 or external interface 881 (e.g. Ethernet, Wi-Fi, etc.) can be used to connect the computer apparatus to a wide area network such as the Internet, a mouse input device, or a scanner. The interconnection via system bus allows the central processor 873, which may include one or more processors, to communicate with each subsystem and to control the execution of instructions from system memory 872 or the fixed disk 879 (such as a hard drive or optical disk), as well as the exchange of information between subsystems. The system memory 872 and/or the fixed disk 879 may embody a computer readable medium. Any of the data mentioned herein can be output from one component to another component and can be output to the user.

[0096] It should be understood that any of the embodiments of the present invention can be implemented in the form of control logic using hardware (e.g., an application specific integrated circuit or field programmable gate array) and/or using computer software with a generally programmable processor in a modular or integrated manner. As used herein, a processor includes a multi-core processor on a same integrated chip, or multiple processing units on a single circuit board or networked. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement embodiments of the present invention using hardware and a combination of hardware and software.

[0097] Any of the software components or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java.RTM., C++ or Perl using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions or commands on a computer readable medium for storage and/or transmission. Suitable media include random access memory (RAM), read only memory (ROM), a magnetic medium such as a hard drive or a floppy disk, an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like. The computer readable medium may be any combination of such storage or transmission devices.

[0098] Such programs may also be encoded and transmitted using carrier signals adapted for transmission via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet. As such, a computer readable medium according to an embodiment of the present invention may be created using a data signal encoded with such programs. Computer readable media encoded with the program code may be packaged with a compatible device or provided separately from other devices (e.g., via Internet download). Any such computer readable medium may reside on or within a single computer program product (e.g. a hard drive, a CD, or an entire computer system), and may be present on or within different computer program products within a system or network. A computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.

[0099] Any of the methods described herein may be totally or partially performed with a computer system including one or more processors, which can be configured to perform the steps. Thus, embodiments can be directed to computer systems configured to perform the steps of any of the methods described herein, potentially with different components performing a respective step or a respective group of steps. Although presented as numbered steps, steps of methods herein can be performed at a same time or in a different order. Additionally, portions of these steps may be used with portions of other steps from other methods. Also, all or portions of a step may be optional. Additionally, any of the steps of any of the methods can be performed with modules, circuits, or other means for performing these steps.

[0100] The specific details of particular embodiments may be combined in any suitable manner without departing from the spirit and scope of embodiments of the invention. However, other embodiments of the invention may be directed to specific embodiments relating to each individual aspect, or specific combinations of these individual aspects.

[0101] The above description of exemplary embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teaching above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.

[0102] A recitation of "a", "an" or "the" is intended to mean "one or more" unless specifically indicated to the contrary.

* * * * *

