Graphics Security With Synergistic Encryption, Content-based And Resource Management Technology

Zage; David; et al.

Patent Application Summary

U.S. patent application number 17/133336 was filed with the patent office on December 23, 2020, and published on May 5, 2022, as publication number 2022/0138286, for graphics security with synergistic encryption, content-based and resource management technology. This patent application is currently assigned to Intel Corporation. The applicant listed for this patent is Intel Corporation. The invention is credited to Aravindh Anantaraman, Omer Ben-Shalom, Julien Carreno, Siddhartha Chhabra, David Cowperthwaite, Scott Janus, Vidhya Krishnan, Tomer Levy, Aditya Navale, Alex Nayshtut, Rajesh Poornachandran, David Puffer, Xiaoyu Ruan, Ankur Shah, Vedvyas Shanbhogue, Ronald Silvas, Ned M. Smith, David Zage.

Publication Number: 20220138286
Application Number: 17/133336
Family ID: 1000005372949
Publication Date: 2022-05-05

United States Patent Application 20220138286
Kind Code A1
Zage; David; et al. May 5, 2022

GRAPHICS SECURITY WITH SYNERGISTIC ENCRYPTION, CONTENT-BASED AND RESOURCE MANAGEMENT TECHNOLOGY

Abstract

Systems, apparatuses and methods may provide for encryption-based technology. Data may be encrypted locally by a graphics processor with encryption engines. The graphics processor components may be verified with a root-of-trust based on a collection of claims. The graphics processor may further be able to convert encrypted data from a non-pageable format to a pageable format. The graphics processor may further process data associated with a virtual machine based on a key that is known by the virtual machine and the graphics processor.


Inventors: Zage; David; (Livermore, CA) ; Janus; Scott; (Loomis, CA) ; Smith; Ned M.; (Beaverton, OR) ; Krishnan; Vidhya; (Folsom, CA) ; Chhabra; Siddhartha; (Portland, OR) ; Poornachandran; Rajesh; (Portland, OR) ; Levy; Tomer; (Tel Aviv, IL) ; Carreno; Julien; (El Dorado Hills, CA) ; Shah; Ankur; (Folsom, CA) ; Silvas; Ronald; (Sacramento, CA) ; Anantaraman; Aravindh; (Folsom, CA) ; Puffer; David; (Tempe, AZ) ; Shanbhogue; Vedvyas; (Austin, TX) ; Cowperthwaite; David; (Portland, OR) ; Navale; Aditya; (Folsom, CA) ; Ben-Shalom; Omer; (Rishon Le-Tzion, IL) ; Nayshtut; Alex; (Gan Yavne D, IL) ; Ruan; Xiaoyu; (Folsom, CA)
Applicant: Intel Corporation, Santa Clara, CA, US
Assignee: Intel Corporation, Santa Clara, CA
Family ID: 1000005372949
Appl. No.: 17/133336
Filed: December 23, 2020

Related U.S. Patent Documents

Application Number: 63/108,691
Filing Date: Nov. 2, 2020

Current U.S. Class: 726/26
Current CPC Class: G06F 21/105 20130101; G06T 1/20 20130101; G06F 9/45558 20130101; G06F 21/602 20130101; G06F 2009/45587 20130101
International Class: G06F 21/10 20060101 G06F021/10; G06F 21/60 20060101 G06F021/60; G06T 1/20 20060101 G06T001/20; G06F 9/455 20060101 G06F009/455

Claims



1. A graphics processor comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to: identify confidential data to be rendered, wherein the confidential data is to be associated with a virtual machine; encrypt the confidential data according to a first encryption key to generate encrypted confidential data; cause the encrypted confidential data to be stored in a first buffer; and decrypt the encrypted confidential data to generate decrypted confidential data.

2. The graphics processor of claim 1, wherein the logic coupled to the one or more substrates is to: receive the first encryption key from a trusted execution environment.

3. The graphics processor of claim 2, wherein the first encryption key is to be a private symmetric digital rights management (DRM) session key.

4. The graphics processor of claim 1, wherein the logic coupled to the one or more substrates is to: generate the first encryption key.

5. The graphics processor of claim 1, wherein the logic coupled to the one or more substrates is to: composite the decrypted confidential data with application data to generate composited confidential and application data, wherein the application data is to be associated with one or more applications to be executed on a host operating system; encrypt the composited confidential and application data according to a second encryption key to generate encrypted composited confidential and application data, wherein the second key is to be different from the first key; and store the encrypted composited confidential and application data in a second buffer that is to be different from the first buffer.

6. The graphics processor of claim 5, wherein the logic coupled to the one or more substrates is to: in response to an identification that the encrypted composited confidential and application data is to be displayed, decrypt the encrypted composited confidential and application data according to the second key.

7. A semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to: transmit, with a first target environment of a plurality of target environments of a graphics processor, first key seeds to compute engines of the graphics processor; collect claims, with the first target environment, from the compute engines to generate evidence; and generate, with the compute engines, unique identity keys for each of the compute engines based on the first key seeds.

8. The apparatus of claim 7, wherein the logic coupled to the one or more substrates is to: transmit, with the plurality of target environments, second key seeds to each other.

9. The apparatus of claim 8, wherein the logic coupled to the one or more substrates is to: generate, with the plurality of target environments, unique identity keys based on the second key seeds.

10. The apparatus of claim 9, wherein the logic coupled to the one or more substrates is to: collect, with the plurality of target environments, claims of the plurality of target environments; and generate evidence for attestation based on the claims of the plurality of target environments.

11. The apparatus of claim 7, wherein the logic coupled to the one or more substrates is to: generate, with root-of-trust (RoT) hardware of the graphics processor, a key seed for a second target environment of the plurality of target environments.

12. The apparatus of claim 11, wherein the logic coupled to the one or more substrates is to: collect claims, with the RoT hardware, from the second target environment; and generate, with the RoT hardware, evidence based on the claims collected from the second target environment.

13. At least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to: identify that first data is to be in a first format, wherein the first format is to be a physical address based encryption format; convert, with a graphics processor, the first data from the first format to a second format, wherein the second format is to be a physical address agnostic encryption format; and page-out the first data, which is to be in the second format, from a memory to a non-volatile storage.

14. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause the computing device to: increment a global counter in response to an identification that the first data is to be paged-out.

15. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause the computing device to: generate a message authentication code (MAC) value based on the first data that is to be in the second format.

16. The at least one computer readable storage medium of claim 15, wherein the instructions, when executed, cause the computing device to: store the MAC value in a protected memory.

17. The at least one computer readable storage medium of claim 13, wherein the instructions, when executed, cause the computing device to: page-in second data from a storage; calculate a message authentication code (MAC) value based on the second data; and compare the MAC value of the second data to a MAC value of the first data to determine whether the second data is to correspond to the first data.

18. The at least one computer readable storage medium of claim 17, wherein the instructions, when executed, cause the computing device to: execute one or more operations based on the second data when the MAC value of the second data is the same as the MAC value of the first data; and bypass one or more operations based on the second data when the MAC value of the second data is dissimilar from the MAC value of the first data.

19. A computing system comprising: a data storage; a host processor; a plurality of accelerators that are to be divided into a first trust domain and a second trust domain, wherein the plurality of accelerators are to include a graphics processor; a converged cryptographic engine (CCE) implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic; and a memory including a set of instructions, which when executed by one or more of the graphics processor or the host processor, cause the computing system to: partition a plurality of encryption keys between the first trust domain and the second trust domain so that first encryption keys of the plurality of encryption keys are assigned to the first trust domain, and second encryption keys of the plurality of encryption keys are assigned to the second trust domain; and encrypt, with the CCE, data according to the first encryption keys or the second encryption keys based on whether the data is to originate from the first trust domain or the second trust domain.

20. The computing system of claim 19, wherein the instructions, when executed, cause the computing system to: identify, with the CCE, that a first data write is to originate from the first trust domain; and encrypt, with the CCE, data associated with the first data write with a key of the first encryption keys.

21. The computing system of claim 20, wherein the instructions, when executed, cause the computing system to: identify, with the CCE, that a second data write is to originate from the second trust domain; and encrypt, with the CCE, data associated with the second data write with a key of the second encryption keys.

22. The computing system of claim 19, wherein the instructions, when executed, cause the computing system to: block the host processor from accessing the first encryption keys and the second encryption keys.

23. The computing system of claim 19, wherein the instructions, when executed, cause the computing system to: store the encrypted data in the data storage.

24. The computing system of claim 19, wherein the CCE is to be in a memory path between the first and second trust domains and the data storage.
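
Claims 13 through 18 above describe a paging flow in which data is re-encrypted from a physical-address-based format into a physical-address-agnostic format before page-out, a message authentication code (MAC) is recorded in protected memory (with a global counter incremented per page-out), and the MAC is recomputed and compared on page-in before the data is used. The following is a minimal, CPU-side sketch of that flow rather than the claimed implementation: the compute_mac placeholder, the ProtectedMemory structure, and the identity "conversion" step are illustrative assumptions, and a real design would use a keyed MAC (e.g., HMAC or AES-GMAC) with hardware-protected key and counter storage.

```cpp
// Minimal sketch (not the claimed implementation) of the page-out/page-in
// flow: re-encrypt to an address-agnostic form, record a MAC, verify on
// page-in. compute_mac is a stand-in, NOT cryptographic.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

using Page = std::vector<uint8_t>;

struct ProtectedMemory {                                 // hypothetical MAC/counter store
    std::unordered_map<uint64_t, size_t> mac_table;      // page id -> recorded MAC
    uint64_t global_counter = 0;                         // incremented per page-out (claim 14)
};

size_t compute_mac(const Page& page) {                   // placeholder MAC
    return std::hash<std::string>{}(std::string(page.begin(), page.end()));
}

// Page-out: convert to the physical-address-agnostic format (modeled here as a
// copy), record the MAC, bump the counter, and return the bytes to be stored.
Page page_out(uint64_t page_id, const Page& address_based, ProtectedMemory& pm) {
    Page address_agnostic = address_based;               // stand-in for re-encryption
    pm.mac_table[page_id] = compute_mac(address_agnostic);
    ++pm.global_counter;
    return address_agnostic;                             // written to non-volatile storage
}

// Page-in: recompute the MAC and compare it with the recorded value before
// allowing any operations on the data (claims 17-18).
bool page_in_ok(uint64_t page_id, const Page& from_storage, const ProtectedMemory& pm) {
    auto it = pm.mac_table.find(page_id);
    return it != pm.mac_table.end() && it->second == compute_mac(from_storage);
}

int main() {
    ProtectedMemory pm;
    Page stored = page_out(42, Page{1, 2, 3, 4}, pm);
    std::cout << "verified: " << page_in_ok(42, stored, pm) << "\n";  // 1
    stored[0] ^= 0xFF;                                    // simulate tampering in storage
    std::cout << "verified: " << page_in_ok(42, stored, pm) << "\n";  // 0
}
```

In this sketch, tampering with the paged-out bytes changes the recomputed MAC, so the page-in check fails and the dependent operations would be bypassed, mirroring claim 18.
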
Description



CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of priority to U.S. Provisional Patent Application No. 63/108,691 filed on Nov. 2, 2020.

TECHNICAL FIELD

[0002] This disclosure relates generally to data processing and more particularly to data processing via a general-purpose graphics processing unit (GPU).

BACKGROUND

[0003] Current parallel graphics data processing includes systems and methods developed to perform specific operations on graphics data such as, for example, linear interpolation, tessellation, rasterization, texture mapping, depth testing, etc. Traditionally, graphics processors used fixed function computational units to process graphics data; however, more recently, portions of graphics processors have been made programmable, enabling such processors to support a wider variety of operations for processing vertex and fragment data.

[0004] To further increase performance, graphics processors typically implement processing techniques such as pipelining that attempt to process, in parallel, as much graphics data as possible throughout the different parts of the graphics pipeline. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. A general overview of software and hardware for SIMT architectures can be found in Shane Cook, CUDA Programming Chapter 3, pages 37-51 (2013).

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:

[0006] FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the embodiments described herein;

[0007] FIGS. 2A-2D illustrate parallel processor components;

[0008] FIGS. 3A-3C are block diagrams of graphics multiprocessors and multiprocessor-based GPUs;

[0009] FIGS. 4A-4F illustrate an exemplary architecture in which a plurality of GPUs is communicatively coupled to a plurality of multi-core processors;

[0010] FIG. 5 illustrates a graphics processing pipeline;

[0011] FIGS. 6A and 6B illustrate an example of a granular, lane-specific encryption and decryption process according to an embodiment;

[0012] FIG. 6C is a block diagram of a SIMD architecture according to an embodiment;

[0013] FIG. 6D is a flowchart of an example of a method of granular encryption and decryption according to an embodiment;

[0014] FIG. 6E is a flowchart of an example of a method of processing read requests according to an embodiment;

[0015] FIG. 6F illustrates an example of an encryption and storage process according to an embodiment;

[0016] FIG. 6G is a block diagram of a performance-enhanced computing architecture according to an embodiment;

[0017] FIG. 6H is a block diagram of an entry according to an embodiment;

[0018] FIG. 6I is a flowchart of an example of a method of entering data into a ledger according to an embodiment;

[0019] FIG. 7A is an exemplary architecture in which a converged cryptographic engine executes encryption and decryption according to an embodiment;

[0020] FIG. 7B is a flowchart of an example of a method of encrypting data and decrypting data according to various trust domains according to an embodiment;

[0021] FIG. 7C is a flowchart of an example of a method of a granular encryption scheme according to various trust domains according to an embodiment;

[0022] FIG. 7D is a flowchart of an example of a method of encrypting data from a same accelerator and/or CPU with different keys according to an embodiment;

[0023] FIG. 7E illustrates an example of a granular encryption process according to an embodiment;

[0024] FIG. 7F is a flowchart of an example of a method of decrypting data with a GPU according to an embodiment;

[0025] FIG. 7G illustrates an example of an encryption and decryption process according to an embodiment;

[0026] FIG. 7H illustrates an example of a cryptographic cache with cryptographic diffusion and confusion according to an embodiment;

[0027] FIG. 7I is an example of a resources diagram according to an embodiment;

[0028] FIG. 8A is a block diagram of an example of a tenant-based processing environment according to an embodiment;

[0029] FIG. 8B is a block diagram of an example of a graphics processing unit architecture according to an embodiment;

[0030] FIG. 8C illustrates an example of a process of securing trust between a tenant and a graphics processing unit according to an embodiment;

[0031] FIG. 8D is a flowchart of an example of a method of securely attesting according to an embodiment;

[0032] FIG. 9A is a block diagram of an example of a software-accelerated, confidential, security enhanced computing architecture according to an embodiment;

[0033] FIG. 9B is a block diagram of an example of hardware-accelerated, confidential security enhanced computing architecture according to an embodiment;

[0034] FIG. 9C is a flowchart of an example of a method of securely transferring data from a guest OS according to an embodiment;

[0035] FIG. 9D is a flowchart of an example of a method of securely handling data according to an embodiment;

[0036] FIG. 9E is a block diagram of an example of an encryption conversion scheme with a paging process according to an embodiment;

[0037] FIG. 9F is a flowchart of an example of a method of handling paging operations securely according to an embodiment;

[0038] FIG. 9G is a flowchart of an example of a method of paging data according to an embodiment;

[0039] FIG. 10 is a block diagram of an example of a processing system according to an embodiment;

[0040] FIGS. 11A-11D are block diagrams of examples of computing systems and graphics processors according to embodiments;

[0041] FIGS. 12A-12C are block diagrams of examples of additional graphics processor and compute accelerator architectures according to embodiments;

[0042] FIG. 13 is a block diagram of an example of a graphics processing engine of a graphics processor according to an embodiment;

[0043] FIGS. 14A-14B are block diagrams of an example of thread execution logic of a graphics processor core according to an embodiment;

[0044] FIG. 15 illustrates an example of an additional execution unit according to an embodiment;

[0045] FIG. 16 is a block diagram illustrating examples of graphics processor instruction formats according to an embodiment;

[0046] FIG. 17 is a block diagram of another example of a graphics processor according to an embodiment;

[0047] FIG. 18A is a block diagram illustrating an example of a graphics processor command format according to an embodiment;

[0048] FIG. 18B is a block diagram illustrating an example of a graphics processor command sequence according to an embodiment;

[0049] FIG. 19 illustrates an example graphics software architecture for a data processing system according to an embodiment;

[0050] FIG. 20A is a block diagram illustrating an example of an IP core development system according to an embodiment;

[0051] FIG. 20B illustrates an example of a cross-section side view of an integrated circuit package assembly according to an embodiment;

[0052] FIGS. 20C-20D illustrate examples of package assemblies according to an embodiment;

[0053] FIG. 21 is a block diagram illustrating an example of a system on a chip integrated circuit according to an embodiment; and

[0054] FIGS. 22A-22B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments.

DESCRIPTION OF EMBODIMENTS

[0055] A graphics processing unit (GPU) is communicatively coupled to host/processor cores to accelerate, for example, graphics operations, machine-learning operations, pattern analysis operations, and/or various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or another interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). Alternatively, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.
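
As a concrete illustration of the work descriptor concept described above, the sketch below models a host core packaging a command sequence into a descriptor and handing it to a submission queue. The structures, opcodes, field names, and queue type are assumptions made for illustration; they are not an actual driver or hardware interface.

```cpp
// Illustrative model of a "work descriptor" carrying a command sequence from a
// host core to the GPU. All names and layouts are assumptions for this sketch.
#include <cstddef>
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

enum class Opcode : uint32_t { Draw, Dispatch, CopyBuffer };

struct Command {
    Opcode op;
    uint64_t args[4];                       // opcode-specific parameters
};

struct WorkDescriptor {
    uint32_t queue_id;                      // target queue/engine on the GPU
    std::vector<Command> commands;          // the command/instruction sequence
};

class SubmissionQueue {                     // stands in for a driver submission path
public:
    void submit(WorkDescriptor wd) { pending_.push(std::move(wd)); }
    std::size_t depth() const { return pending_.size(); }
private:
    std::queue<WorkDescriptor> pending_;    // consumed by the GPU-side scheduler
};

int main() {
    SubmissionQueue q;
    q.submit(WorkDescriptor{0, {{Opcode::Dispatch,   {64, 1, 1, 0}},
                                {Opcode::CopyBuffer, {0x1000, 0x2000, 4096, 0}}}});
    return q.depth() == 1 ? 0 : 1;
}
```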

[0056] In the following description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to one of skill in the art that the embodiments described herein may be practiced without one or more of these specific details. In other instances, well-known features have not been described to avoid obscuring the details of the present embodiments.

[0057] System Overview

[0058] FIG. 1 is a block diagram illustrating a computing system 100 configured to implement one or more aspects of the embodiments described herein. The computing system 100 includes a processing subsystem 101 having one or more processor(s) 102 and a system memory 104 communicating via an interconnection path that may include a memory hub 105. The memory hub 105 may be a separate component within a chipset component or may be integrated within the one or more processor(s) 102. The memory hub 105 couples with an I/O subsystem 111 via a communication link 106. The I/O subsystem 111 includes an I/O hub 107 that can enable the computing system 100 to receive input from one or more input device(s) 108. Additionally, the I/O hub 107 can enable a display controller, which may be included in the one or more processor(s) 102, to provide outputs to one or more display device(s) 110A. In one embodiment the one or more display device(s) 110A coupled with the I/O hub 107 can include a local, internal, or embedded display device.

[0059] The processing subsystem 101, for example, includes one or more parallel processor(s) 112 coupled to memory hub 105 via a bus or other communication link 113. The communication link 113 may be one of any number of standards-based communication link technologies or protocols, such as, but not limited to PCI Express, or may be a vendor specific communications interface or communications fabric. The one or more parallel processor(s) 112 may form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core (MIC) processor. For example, the one or more parallel processor(s) 112 form a graphics processing subsystem that can output pixels to one of the one or more display device(s) 110A coupled via the I/O Hub 107. The one or more parallel processor(s) 112 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 110B.

[0060] Within the I/O subsystem 111, a system storage unit 114 can connect to the I/O hub 107 to provide a storage mechanism for the computing system 100. An I/O switch 116 can be used to provide an interface mechanism to enable connections between the I/O hub 107 and other components, such as a network adapter 118 and/or wireless network adapter 119 that may be integrated into the platform, and various other devices that can be added via one or more add-in device(s) 120. The add-in device(s) 120 may also include, for example, one or more external graphics processor devices and/or compute accelerators. The network adapter 118 can be an Ethernet adapter or another wired network adapter. The wireless network adapter 119 can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios.

[0061] The computing system 100 can include other components not explicitly shown; USB or other port connections, optical storage drives, video capture devices, and the like, may also be connected to the I/O hub 107. Communication paths interconnecting the various components in FIG. 1 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or any other bus or point-to-point communication interfaces and/or protocol(s), such as the NVLink high-speed interconnect, or interconnect protocols known in the art.

[0062] The one or more parallel processor(s) 112 may incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU). Alternatively or additionally, the one or more parallel processor(s) 112 can incorporate circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. Components of the computing system 100 may be integrated with one or more other system elements on a single integrated circuit. For example, the one or more parallel processor(s) 112, memory hub 105, processor(s) 102, and I/O hub 107 can be integrated into a system on chip (SoC) integrated circuit. Alternatively, the components of the computing system 100 can be integrated into a single package to form a system in package (SIP) configuration. In one embodiment at least a portion of the components of the computing system 100 can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system.

[0063] It will be appreciated that the computing system 100 shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of processor(s) 102, and the number of parallel processor(s) 112, may be modified as desired. For instance, system memory 104 can be connected to the processor(s) 102 directly rather than through a bridge, while other devices communicate with system memory 104 via the memory hub 105 and the processor(s) 102. In other alternative topologies, the parallel processor(s) 112 are connected to the I/O hub 107 or directly to one of the one or more processor(s) 102, rather than to the memory hub 105. In other embodiments, the I/O hub 107 and memory hub 105 may be integrated into a single chip. It is also possible that two or more sets of processor(s) 102 are attached via multiple sockets, which can couple with two or more instances of the parallel processor(s) 112.

[0064] Some of the particular components shown herein are optional and may not be included in all implementations of the computing system 100. For example, any number of add-in cards or peripherals may be supported, or some components may be eliminated. Furthermore, some architectures may use different terminology for components similar to those illustrated in FIG. 1. For example, the memory hub 105 may be referred to as a Northbridge in some architectures, while the I/O hub 107 may be referred to as a Southbridge.

[0065] FIG. 2A illustrates a parallel processor 200. The parallel processor 200 may be a GPU, GPGPU or the like as described herein. The various components of the parallel processor 200 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGAs). The illustrated parallel processor 200 may be, or may be one of, the parallel processor(s) 112 shown in FIG. 1.

[0066] The parallel processor 200 includes a parallel processing unit 202. The parallel processing unit includes an I/O unit 204 that enables communication with other devices, including other instances of the parallel processing unit 202. The I/O unit 204 may be directly connected to other devices. For instance, the I/O unit 204 connects with other devices via the use of a hub or switch interface, such as memory hub 105. The connections between the memory hub 105 and the I/O unit 204 form a communication link 113. Within the parallel processing unit 202, the I/O unit 204 connects with a host interface 206 and a memory crossbar 216, where the host interface 206 receives commands directed to performing processing operations and the memory crossbar 216 receives commands directed to performing memory operations.

[0067] When the host interface 206 receives a command buffer via the I/O unit 204, the host interface 206 can direct work operations to perform those commands to a front end 208. In one embodiment the front end 208 couples with a scheduler 210, which is configured to distribute commands or other work items to a processing cluster array 212. The scheduler 210 ensures that the processing cluster array 212 is properly configured and in a valid state before tasks are distributed to the processing clusters of the processing cluster array 212. The scheduler 210 may be implemented via firmware logic executing on a microcontroller. The microcontroller implemented scheduler 210 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on the processing array 212. Preferably, the host software can provide workloads for scheduling on the processing array 212 via one of multiple graphics processing doorbells. The workloads can then be automatically distributed across the processing array 212 by the scheduler 210 logic within the scheduler microcontroller.
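
The doorbell mechanism mentioned above can be pictured as the host writing a work item into a ring buffer and then writing a doorbell register so the scheduler microcontroller knows new work is pending. The sketch below models the doorbell as an atomic counter on the CPU; real hardware would use a memory-mapped register per queue, and all names here are illustrative only.

```cpp
// Sketch of doorbell-style submission: the host writes a work item into a ring
// buffer, then "rings" a doorbell so the scheduler knows new work is pending.
#include <array>
#include <atomic>
#include <cstdint>
#include <optional>

struct RingBuffer {
    std::array<uint64_t, 256> entries{};    // handles to queued work items
    std::atomic<uint32_t> head{0};          // next slot the host writes
    std::atomic<uint32_t> tail{0};          // next slot the scheduler reads
    std::atomic<uint64_t> doorbell{0};      // written after every submission
};

void host_submit(RingBuffer& rb, uint64_t work_handle) {
    uint32_t slot = rb.head.fetch_add(1) % rb.entries.size();
    rb.entries[slot] = work_handle;
    rb.doorbell.fetch_add(1);               // notify the scheduler microcontroller
}

std::optional<uint64_t> scheduler_poll(RingBuffer& rb) {
    if (rb.tail.load() == rb.head.load()) return std::nullopt;   // nothing pending
    uint32_t slot = rb.tail.fetch_add(1) % rb.entries.size();
    return rb.entries[slot];                // hand off to a processing cluster
}

int main() {
    RingBuffer rb;
    host_submit(rb, 0xABCD);
    return scheduler_poll(rb).has_value() ? 0 : 1;
}
```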

[0068] The processing cluster array 212 can include up to "N" processing clusters (e.g., cluster 214A, cluster 214B, through cluster 214N). Each cluster 214A-214N of the processing cluster array 212 can execute a large number of concurrent threads. The scheduler 210 can allocate work to the clusters 214A-214N of the processing cluster array 212 using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. The scheduling can be handled dynamically by the scheduler 210, or can be assisted in part by compiler logic during compilation of program logic configured for execution by the processing cluster array 212. Optionally, different clusters 214A-214N of the processing cluster array 212 can be allocated for processing different types of programs or for performing different types of computations.

[0069] The processing cluster array 212 can be configured to perform various types of parallel processing operations. For example, the cluster array 212 is configured to perform general-purpose parallel compute operations. For example, the processing cluster array 212 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.

[0070] The processing cluster array 212 is configured to perform parallel graphics processing operations. In embodiments in which the parallel processor 200 is configured to perform graphics processing operations, the processing cluster array 212 can include additional logic to support the execution of such graphics processing operations, including, but not limited to, texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. Additionally, the processing cluster array 212 can be configured to execute graphics processing related shader programs such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. The parallel processing unit 202 can transfer data from system memory via the I/O unit 204 for processing. During processing, the transferred data can be stored to on-chip memory (e.g., parallel processor memory 222), then written back to system memory.

[0071] In embodiments in which the parallel processing unit 202 is used to perform graphics processing, the scheduler 210 may be configured to divide the processing workload into approximately equal sized tasks, to better enable distribution of the graphics processing operations to multiple clusters 214A-214N of the processing cluster array 212. In some of these embodiments, portions of the processing cluster array 212 can be configured to perform different types of processing. For example a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing.

[0072] During operation, the processing cluster array 212 can receive processing tasks to be executed via the scheduler 210, which receives commands defining processing tasks from front end 208. For graphics processing operations, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). The scheduler 210 may be configured to fetch the indices corresponding to the tasks or may receive the indices from the front end 208. The front end 208 can be configured to ensure the processing cluster array 212 is configured to a valid state before the workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated.

[0073] Each of the one or more instances of the parallel processing unit 202 can couple with parallel processor memory 222. The parallel processor memory 222 can be accessed via the memory crossbar 216, which can receive memory requests from the processing cluster array 212 as well as the I/O unit 204. The memory crossbar 216 can access the parallel processor memory 222 via a memory interface 218. The memory interface 218 can include multiple partition units (e.g., partition unit 220A, partition unit 220B, through partition unit 220N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 222. The number of partition units 220A-220N may be configured to be equal to the number of memory units, such that a first partition unit 220A has a corresponding first memory unit 224A, a second partition unit 220B has a corresponding memory unit 224B, and an Nth partition unit 220N has a corresponding Nth memory unit 224N. In other embodiments, the number of partition units 220A-220N may not be equal to the number of memory devices.
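
One way to picture the relationship between partition units and memory units described above is simple address interleaving: consecutive blocks of the address space are routed to different partition units so that adjacent data can be written in parallel. The routing function, the 256-byte interleave granule, and the four-unit configuration below are assumptions for illustration, not the actual hardware mapping.

```cpp
// Toy routing function showing how addresses could be interleaved across
// partition units so consecutive blocks land in different memory units.
#include <cstdint>
#include <cstdio>

struct PartitionTarget {
    uint32_t partition_unit;     // which partition unit (220A..220N)
    uint64_t local_offset;       // offset within that unit's memory (224x)
};

PartitionTarget route(uint64_t address, uint32_t num_partitions, uint64_t granule) {
    uint64_t block = address / granule;
    return {static_cast<uint32_t>(block % num_partitions),
            (block / num_partitions) * granule + (address % granule)};
}

int main() {
    for (uint64_t addr = 0; addr < 1024; addr += 256) {
        PartitionTarget t = route(addr, 4, 256);
        std::printf("addr %4llu -> partition unit %u, local offset %llu\n",
                    static_cast<unsigned long long>(addr), t.partition_unit,
                    static_cast<unsigned long long>(t.local_offset));
    }
}
```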

[0074] The memory units 224A-224N can include various types of memory devices, including dynamic random-access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. Optionally, the memory units 224A-224N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). Persons skilled in the art will appreciate that the specific implementation of the memory units 224A-224N can vary, and can be selected from one of various conventional designs. Render targets, such as frame buffers or texture maps may be stored across the memory units 224A-224N, allowing partition units 220A-220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory 222. In some embodiments, a local instance of the parallel processor memory 222 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.

[0075] Optionally, any one of the clusters 214A-214N of the processing cluster array 212 has the ability to process data that will be written to any of the memory units 224A-224N within parallel processor memory 222. The memory crossbar 216 can be configured to transfer the output of each cluster 214A-214N to any partition unit 220A-220N or to another cluster 214A-214N, which can perform additional processing operations on the output. Each cluster 214A-214N can communicate with the memory interface 218 through the memory crossbar 216 to read from or write to various external memory devices. In one embodiment, the memory crossbar 216 has a connection to the memory interface 218 to communicate with the I/O unit 204, as well as a connection to a local instance of the parallel processor memory 222, enabling the processing units within the different processing clusters 214A-214N to communicate with system memory or other memory that is not local to the parallel processing unit 202. Generally, the memory crossbar 216 may, for example, be able to use virtual channels to separate traffic streams between the clusters 214A-214N and the partition units 220A-220N.

[0076] While a single instance of the parallel processing unit 202 is illustrated within the parallel processor 200, any number of instances of the parallel processing unit 202 can be included. For example, multiple instances of the parallel processing unit 202 can be provided on a single add-in card, or multiple add-in cards can be interconnected. The different instances of the parallel processing unit 202 can be configured to inter-operate even if the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. Optionally, some instances of the parallel processing unit 202 can include higher precision floating point units relative to other instances. Systems incorporating one or more instances of the parallel processing unit 202 or the parallel processor 200 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.

[0077] FIG. 2B is a block diagram of a partition unit 220. The partition unit 220 may be an instance of one of the partition units 220A-220N of FIG. 2A. As illustrated, the partition unit 220 includes an L2 cache 221, a frame buffer interface 225, and a ROP 226 (raster operations unit). The L2 cache 221 is a read/write cache that is configured to perform load and store operations received from the memory crossbar 216 and ROP 226. Read misses and urgent write-back requests are output by L2 cache 221 to frame buffer interface 225 for processing. Updates can also be sent to the frame buffer via the frame buffer interface 225 for processing. In one embodiment the frame buffer interface 225 interfaces with one of the memory units in parallel processor memory, such as the memory units 224A-224N of FIG. 2A (e.g., within parallel processor memory 222). The partition unit 220 may additionally or alternatively also interface with one of the memory units in parallel processor memory via a memory controller (not shown).

[0078] In graphics applications, the ROP 226 is a processing unit that performs raster operations such as stencil, z test, blending, and the like. The ROP 226 then outputs processed graphics data that is stored in graphics memory. In some embodiments the ROP 226 includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. The compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms. The type of compression that is performed by the ROP 226 can vary based on the statistical characteristics of the data to be compressed. For example, in one embodiment, delta color compression is performed on depth and color data on a per-tile basis.

[0079] The ROP 226 may be included within each processing cluster (e.g., cluster 214A-214N of FIG. 2A) instead of within the partition unit 220. In such an embodiment, read and write requests for pixel data are transmitted over the memory crossbar 216 instead of pixel fragment data. The processed graphics data may be displayed on a display device, such as one of the one or more display device(s) 110 of FIG. 1, routed for further processing by the processor(s) 102, or routed for further processing by one of the processing entities within the parallel processor 200 of FIG. 2A.

[0080] FIG. 2C is a block diagram of a processing cluster 214 within a parallel processing unit. For example, the processing cluster is an instance of one of the processing clusters 214A-214N of FIG. 2A. The processing cluster 214 can be configured to execute many threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. Optionally, single-instruction, multiple-data (SIMD) instruction issue techniques may be used to support parallel execution of a large number of threads without providing multiple independent instruction units. Alternatively, single-instruction, multiple-thread (SIMT) techniques may be used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the processing clusters. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime. The processing cluster 214 may generally implement any of the embodiments described herein, such as, for example the process 600 (FIGS. 6A and 6B), method 670 (FIG. 6D), method 690 (FIG. 6E), the process 3300 (FIG. 6F) and/or be combined with the SIMD architecture 660 (FIG. 6C), already discussed.
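
The SIMD/SIMT distinction above can be illustrated with a small CPU-side model: a SIMD operation applies one instruction across all lanes under a mask, while a SIMT program is executed per thread and may branch differently in each thread. The lane count and the toy thread program below are illustrative assumptions, not hardware behavior.

```cpp
// CPU-side model contrasting the two regimes: SIMD applies one instruction to
// all lanes under a mask, while SIMT runs the same program per thread and each
// thread may take a different branch.
#include <array>
#include <cstdio>

constexpr int kLanes = 8;

// SIMD style: identical instruction on every lane, inactive lanes masked off.
void simd_add(std::array<int, kLanes>& v, int imm, const std::array<bool, kLanes>& mask) {
    for (int lane = 0; lane < kLanes; ++lane)
        if (mask[lane]) v[lane] += imm;
}

// SIMT style: each thread executes the same program but may diverge at the branch.
void simt_thread_program(int thread_id, std::array<int, kLanes>& v) {
    if (v[thread_id] % 2 == 0)
        v[thread_id] *= 2;       // even-valued threads follow this path
    else
        v[thread_id] += 100;     // odd-valued threads follow this one
}

int main() {
    std::array<int, kLanes> data{0, 1, 2, 3, 4, 5, 6, 7};
    simd_add(data, 10, {true, true, true, true, false, false, false, false});
    for (int t = 0; t < kLanes; ++t) simt_thread_program(t, data);
    for (int x : data) std::printf("%d ", x);
    std::printf("\n");
}
```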

[0081] Operation of the processing cluster 214 can be controlled via a pipeline manager 232 that distributes processing tasks to SIMT parallel processors. The pipeline manager 232 receives instructions from the scheduler 210 of FIG. 2A and manages execution of those instructions via a graphics multiprocessor 234 and/or a texture unit 236. The illustrated graphics multiprocessor 234 is an exemplary instance of a SIMT parallel processor. However, various types of SIMT parallel processors of differing architectures may be included within the processing cluster 214. One or more instances of the graphics multiprocessor 234 can be included within a processing cluster 214. The graphics multiprocessor 234 can process data and a data crossbar 240 can be used to distribute the processed data to one of multiple possible destinations, including other shader units. The pipeline manager 232 can facilitate the distribution of processed data by specifying destinations for processed data to be distributed via the data crossbar 240.

[0082] Each graphics multiprocessor 234 within the processing cluster 214 can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). The functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. The functional execution logic supports a variety of operations including integer and floating-point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. The same functional-unit hardware could be leveraged to perform different operations and any combination of functional units may be present.

[0083] The instructions transmitted to the processing cluster 214 constitute a thread. A set of threads executing across the set of parallel processing engines is a thread group. A thread group executes the same program on different input data. Each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor 234. A thread group may include fewer threads than the number of processing engines within the graphics multiprocessor 234. When a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. A thread group may also include more threads than the number of processing engines within the graphics multiprocessor 234. When the thread group includes more threads than the number of processing engines within the graphics multiprocessor 234, processing can be performed over consecutive clock cycles. Optionally, multiple thread groups can be executed concurrently on the graphics multiprocessor 234.
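
The relationship between thread-group size and processing-engine count described above reduces to simple arithmetic: a group smaller than the engine count leaves some engines idle, while a larger group is issued over consecutive clock cycles. The sketch below captures only that arithmetic; the thread and engine counts in the example are illustrative.

```cpp
// Back-of-the-envelope model of issuing a thread group onto a fixed number of
// processing engines: cycles needed and idle engines in the final cycle.
#include <cstdio>

struct Occupancy {
    unsigned cycles;        // consecutive cycles needed to issue the group
    unsigned idle_engines;  // idle engines in the final (partial) cycle
};

Occupancy issue(unsigned threads_in_group, unsigned processing_engines) {
    unsigned cycles = (threads_in_group + processing_engines - 1) / processing_engines;
    unsigned used_last = threads_in_group - (cycles - 1) * processing_engines;
    return {cycles, processing_engines - used_last};
}

int main() {
    Occupancy a = issue(24, 32);  // fewer threads than engines: 1 cycle, 8 engines idle
    Occupancy b = issue(64, 32);  // more threads than engines: 2 cycles, none idle
    std::printf("24 threads / 32 engines -> %u cycle(s), %u idle\n", a.cycles, a.idle_engines);
    std::printf("64 threads / 32 engines -> %u cycle(s), %u idle\n", b.cycles, b.idle_engines);
}
```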

[0084] The graphics multiprocessor 234 may include an internal cache memory to perform load and store operations. Optionally, the graphics multiprocessor 234 can forego an internal cache and use a cache memory (e.g., L1 cache 248) within the processing cluster 214. Each graphics multiprocessor 234 also has access to L2 caches within the partition units (e.g., partition units 220A-220N of FIG. 2A) that are shared among all processing clusters 214 and may be used to transfer data between threads. The graphics multiprocessor 234 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. Any memory external to the parallel processing unit 202 may be used as global memory. In embodiments in which the processing cluster 214 includes multiple instances of the graphics multiprocessor 234, the instances can share common instructions and data, which may be stored in the L1 cache 248.

[0085] Each processing cluster 214 may include an MMU 245 (memory management unit) that is configured to map virtual addresses into physical addresses. In other embodiments, one or more instances of the MMU 245 may reside within the memory interface 218 of FIG. 2A. The MMU 245 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. The MMU 245 may include address translation lookaside buffers (TLBs) or caches that may reside within the graphics multiprocessor 234, the L1 cache 248, or the processing cluster 214. The physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. The cache line index may be used to determine whether a request for a cache line is a hit or miss.
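
A simplified software model of the MMU behavior described above is shown below: the virtual address is split into a virtual page number and an offset, a page table entry supplies the physical frame, and the resulting physical address yields a cache line index for the hit/miss lookup. The page size, cache line size, and single-level map-based page table are assumptions made to keep the sketch short.

```cpp
// Simplified single-level page table model of virtual-to-physical translation.
#include <cstdint>
#include <optional>
#include <unordered_map>

constexpr uint64_t kPageSize = 4096;   // 4 KiB pages (assumed)
constexpr uint64_t kLineSize = 64;     // 64-byte cache lines (assumed)

struct Translation {
    uint64_t physical_address;
    uint64_t cache_line_index;         // used for the hit/miss lookup mentioned above
};

struct Mmu {
    std::unordered_map<uint64_t, uint64_t> page_table;   // virtual page -> physical frame base

    std::optional<Translation> translate(uint64_t virtual_address) const {
        const uint64_t vpn = virtual_address / kPageSize;
        const uint64_t offset = virtual_address % kPageSize;
        const auto pte = page_table.find(vpn);
        if (pte == page_table.end()) return std::nullopt;  // no PTE: page fault path
        const uint64_t pa = pte->second + offset;
        return Translation{pa, pa / kLineSize};
    }
};

int main() {
    Mmu mmu;
    mmu.page_table[0x10] = 0x80000000ULL;                 // map one virtual page
    const auto t = mmu.translate(0x10 * kPageSize + 0x123);
    return t ? 0 : 1;
}
```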

[0086] In graphics and computing applications, a processing cluster 214 may be configured such that each graphics multiprocessor 234 is coupled to a texture unit 236 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering the texture data. Texture data is read from an internal texture L1 cache (not shown) or in some embodiments from the L1 cache within graphics multiprocessor 234 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. Each graphics multiprocessor 234 outputs processed tasks to the data crossbar 240 to provide the processed task to another processing cluster 214 for further processing or to store the processed task in an L2 cache, local parallel processor memory, or system memory via the memory crossbar 216. A preROP 242 (pre-raster operations unit) is configured to receive data from the graphics multiprocessor 234 and direct the data to ROP units, which may be located with the partition units as described herein (e.g., partition units 220A-220N of FIG. 2A). The preROP 242 unit can perform optimizations for color blending, organize pixel color data, and perform address translations.

[0087] It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., graphics multiprocessor 234, texture units 236, preROPs 242, etc., may be included within a processing cluster 214. Further, while only one processing cluster 214 is shown, a parallel processing unit as described herein may include any number of instances of the processing cluster 214. Optionally, each processing cluster 214 can be configured to operate independently of other processing clusters 214 using separate and distinct processing units, L1 caches, etc.

[0088] FIG. 2D shows an example of the graphics multiprocessor 234 in which the graphics multiprocessor 234 couples with the pipeline manager 232 of the processing cluster 214. The graphics multiprocessor 234 has an execution pipeline including but not limited to an instruction cache 252, an instruction unit 254, an address mapping unit 256, a register file 258, one or more general purpose graphics processing unit (GPGPU) cores 262, and one or more load/store units 266. The GPGPU cores 262 and load/store units 266 are coupled with cache memory 272 and shared memory 270 via a memory and cache interconnect 268. The graphics multiprocessor 234 may additionally include tensor and/or ray-tracing cores 263 that include hardware logic to accelerate matrix and/or ray-tracing operations.

[0089] The instruction cache 252 may receive a stream of instructions to execute from the pipeline manager 232. The instructions are cached in the instruction cache 252 and dispatched for execution by the instruction unit 254. The instruction unit 254 can dispatch instructions as thread groups (e.g., warps), with each thread of the thread group assigned to a different execution unit within GPGPU core 262. An instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. The address mapping unit 256 can be used to translate addresses in the unified address space into a distinct memory address that can be accessed by the load/store units 266.

[0090] The register file 258 provides a set of registers for the functional units of the graphics multiprocessor 234. The register file 258 provides temporary storage for operands connected to the data paths of the functional units (e.g., GPGPU cores 262, load/store units 266) of the graphics multiprocessor 234. The register file 258 may be divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file 258. For example, the register file 258 may be divided between the different warps being executed by the graphics multiprocessor 234.

[0091] The GPGPU cores 262 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of the graphics multiprocessor 234. In some implementations, the GPGPU cores 262 can include hardware logic that may otherwise reside within the tensor and/or ray-tracing cores 263. The GPGPU cores 262 can be similar in architecture or can differ in architecture. For example and in one embodiment, a first portion of the GPGPU cores 262 include a single precision FPU and an integer ALU while a second portion of the GPGPU cores include a double precision FPU. Optionally, the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. The graphics multiprocessor 234 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. One or more of the GPGPU cores can also include fixed or special function logic.

[0092] The GPGPU cores 262 may include SIMD logic capable of performing a single instruction on multiple sets of data. Optionally, the GPGPU cores 262 can physically execute SIMD8 and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. The SIMD instructions for the GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. Multiple threads of a program configured for the SIMT execution model can be executed via a single SIMD instruction. For example, and in one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit.
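
The physical/logical SIMD widths described above can be pictured as follows: a logically wider instruction (e.g., SIMD32) is lowered onto a physical SIMD8 unit by issuing it over successive groups of eight lanes, and, conversely, eight SIMT threads performing the same operation can share one SIMD8 issue. The sketch below shows the lowering; only the SIMD8/SIMD32 widths come from the paragraph, and the rest is illustrative.

```cpp
// Lowering a logical SIMD32 operation onto a physical SIMD8 unit: the same
// instruction is issued four times over successive groups of eight lanes.
#include <array>
#include <cstdio>

constexpr int kPhysicalWidth = 8;    // physical SIMD8 unit
constexpr int kLogicalWidth = 32;    // logical SIMD32 instruction

// One physical SIMD8 issue over lanes [base, base + 8).
void simd8_mul(std::array<float, kLogicalWidth>& v, float s, int base) {
    for (int lane = 0; lane < kPhysicalWidth; ++lane)
        v[base + lane] *= s;
}

// A logical SIMD32 multiply executed as four consecutive SIMD8 issues.
void simd32_mul(std::array<float, kLogicalWidth>& v, float s) {
    for (int base = 0; base < kLogicalWidth; base += kPhysicalWidth)
        simd8_mul(v, s, base);
}

int main() {
    std::array<float, kLogicalWidth> v{};
    for (int i = 0; i < kLogicalWidth; ++i) v[i] = static_cast<float>(i);
    simd32_mul(v, 2.0f);
    std::printf("%g %g\n", v[1], v[31]);   // 2 and 62
}
```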

[0093] The memory and cache interconnect 268 is an interconnect network that connects each of the functional units of the graphics multiprocessor 234 to the register file 258 and to the shared memory 270. For example, the memory and cache interconnect 268 is a crossbar interconnect that allows the load/store unit 266 to implement load and store operations between the shared memory 270 and the register file 258. The register file 258 can operate at the same frequency as the GPGPU cores 262, thus data transfer between the GPGPU cores 262 and the register file 258 is very low latency. The shared memory 270 can be used to enable communication between threads that execute on the functional units within the graphics multiprocessor 234. The cache memory 272 can be used as a data cache, for example, to cache texture data communicated between the functional units and the texture unit 236. The shared memory 270 can also be used as a program-managed cache. Threads executing on the GPGPU cores 262 can programmatically store data within the shared memory in addition to the automatically cached data that is stored within the cache memory 272.
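
The program-managed cache use of shared memory described above can be approximated on a CPU by explicitly staging a tile of data into a small scratch buffer, operating on the tile, and then moving on, rather than relying on the hardware-managed cache. In the sketch below the scratch buffer stands in for the shared memory 270; the tile size and data are illustrative assumptions.

```cpp
// CPU approximation of a program-managed cache: explicitly stage a tile of
// data into a small scratch buffer, work on the tile, then move to the next.
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <vector>

constexpr std::size_t kTile = 64;            // elements staged per pass (assumed)

float tile_sum(const std::vector<float>& global_data) {
    std::array<float, kTile> scratch{};      // program-managed staging buffer
    float total = 0.0f;
    for (std::size_t base = 0; base < global_data.size(); base += kTile) {
        const std::size_t n = std::min(kTile, global_data.size() - base);
        std::copy(global_data.begin() + base, global_data.begin() + base + n, scratch.begin());
        total += std::accumulate(scratch.begin(), scratch.begin() + n, 0.0f);
    }
    return total;
}

int main() {
    std::vector<float> data(1000, 1.0f);
    std::printf("%g\n", tile_sum(data));     // 1000
}
```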

[0094] FIGS. 3A-3C illustrate additional graphics multiprocessors, according to embodiments. FIGS. 3A-3B illustrate graphics multiprocessors 325, 350, which are related to the graphics multiprocessor 234 of FIG. 2C and may be used in place of one of those. Therefore, the disclosure of any features in combination with the graphics multiprocessor 234 herein also discloses a corresponding combination with the graphics multiprocessor(s) 325, 350, but is not limited to such. FIG. 3C illustrates a graphics processing unit (GPU) 380 which includes dedicated sets of graphics processing resources arranged into multi-core groups 365A-365N, which correspond to the graphics multiprocessors 325, 350. The illustrated graphics multiprocessors 325, 350 and the multi-core groups 365A-365N can be streaming multiprocessors (SM) capable of simultaneous execution of a large number of execution threads.

[0095] The graphics multiprocessor 325 of FIG. 3A includes multiple additional instances of execution resource units relative to the graphics multiprocessor 234 of FIG. 2D. For example, the graphics multiprocessor 325 can include multiple instances of the instruction unit 332A-332B, register file 334A-334B, and texture unit(s) 344A-344B. The graphics multiprocessor 325 also includes multiple sets of graphics or compute execution units (e.g., GPGPU core 336A-336B, tensor core 337A-337B, ray-tracing core 338A-338B) and multiple sets of load/store units 340A-340B. The execution resource units have a common instruction cache 330, texture and/or data cache memory 342, and shared memory 346.

[0096] The various components can communicate via an interconnect fabric 327. The interconnect fabric 327 may include one or more crossbar switches to enable communication between the various components of the graphics multiprocessor 325. The interconnect fabric 327 may be a separate, high-speed network fabric layer upon which each component of the graphics multiprocessor 325 is stacked. The components of the graphics multiprocessor 325 communicate with remote components via the interconnect fabric 327. For example, the GPGPU cores 336A-336B, tensor cores 337A-337B, and ray-tracing cores 338A-338B can each communicate with shared memory 346 via the interconnect fabric 327. The interconnect fabric 327 can arbitrate communication within the graphics multiprocessor 325 to ensure a fair bandwidth allocation between components.

[0097] The graphics multiprocessor 350 of FIG. 3B includes multiple sets of execution resources 356A-356D, where each set of execution resources includes multiple instruction units, register files, GPGPU cores, and load store units, as illustrated in FIG. 2D and FIG. 3A. The execution resources 356A-356D can work in concert with texture unit(s) 360A-360D for texture operations, while sharing an instruction cache 354, and shared memory 353. For example, the execution resources 356A-356D can share an instruction cache 354 and shared memory 353, as well as multiple instances of a texture and/or data cache memory 358A-358B. The various components can communicate via an interconnect fabric 352 similar to the interconnect fabric 327 of FIG. 3A.

[0098] Persons skilled in the art will understand that the architectures described in FIGS. 1, 2A-2D, and 3A-3B are descriptive and not limiting as to the scope of the present embodiments. Thus, the techniques described herein may be implemented on any properly configured processing unit, including, without limitation, one or more mobile application processors, one or more desktop or server central processing units (CPUs) including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit 202 of FIG. 2A, as well as one or more graphics processors or special purpose processing units, without departure from the scope of the embodiments described herein.

[0099] The parallel processor or GPGPU as described herein may be communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or other interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.

[0100] FIG. 3C illustrates a graphics processing unit (GPU) 380 which includes dedicated sets of graphics processing resources arranged into multi-core groups 365A-365N. While the details of only a single multi-core group 365A are provided, it will be appreciated that the other multi-core groups 365B-365N may be equipped with the same or similar sets of graphics processing resources. Details described with respect to the multi-core groups 365A-365N may also apply to any graphics multiprocessor 234, 325, 350 described herein.

[0101] As illustrated, a multi-core group 365A may include a set of graphics cores 370, a set of tensor cores 371, and a set of ray tracing cores 372. A scheduler/dispatcher 368 schedules and dispatches the graphics threads for execution on the various cores 370, 371, 372. A set of register files 369 stores operand values used by the cores 370, 371, 372 when executing the graphics threads. These may include, for example, integer registers for storing integer values, floating point registers for storing floating point values, vector registers for storing packed data elements (integer and/or floating-point data elements) and tile registers for storing tensor/matrix values. The tile registers may be implemented as combined sets of vector registers.

[0102] One or more combined level 1 (L1) caches and shared memory units 373 store graphics data such as texture data, vertex data, pixel data, ray data, bounding volume data, etc., locally within each multi-core group 365A. One or more texture units 374 can also be used to perform texturing operations, such as texture mapping and sampling. A Level 2 (L2) cache 375 shared by all or a subset of the multi-core groups 365A-365N stores graphics data and/or instructions for multiple concurrent graphics threads. As illustrated, the L2 cache 375 may be shared across a plurality of multi-core groups 365A-365N. One or more memory controllers 367 couple the GPU 380 to a memory 366 which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory).

[0103] Input/output (I/O) circuitry 363 couples the GPU 380 to one or more I/O devices 362 such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 362 to the GPU 380 and memory 366. One or more I/O memory management units (IOMMUs) 364 of the I/O circuitry 363 couple the I/O devices 362 directly to the system memory 366. Optionally, the IOMMU 364 manages multiple sets of page tables to map virtual addresses to physical addresses in system memory 366. The I/O devices 362, CPU(s) 361, and GPU(s) 380 may then share the same virtual address space.

[0104] In one implementation of the IOMMU 364, the IOMMU 364 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within system memory 366). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). While not illustrated in FIG. 3C, each of the cores 370, 371, 372 and/or multi-core groups 365A-365N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations.
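To make the two-stage mapping concrete, the following is a minimal sketch (not part of the embodiments; the dictionary-based page tables, 4 KiB page size, and function names are assumptions chosen for clarity) that walks a guest virtual address through a first set of page tables to a guest physical address and through a second set to a host physical address, as the IOMMU 364 is described as doing.

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages for illustration

def translate(addr, page_table):
    """Translate an address using a simple dictionary-based page table."""
    vpn, offset = divmod(addr, PAGE_SIZE)
    if vpn not in page_table:
        raise KeyError(f"page fault for page {vpn:#x}")
    return page_table[vpn] * PAGE_SIZE + offset

def guest_virtual_to_host_physical(gva, stage1, stage2):
    """Two-stage walk: guest virtual -> guest physical -> host physical."""
    gpa = translate(gva, stage1)   # first set of page tables (guest/graphics)
    hpa = translate(gpa, stage2)   # second set of page tables (system/host)
    return hpa

# Example: map guest virtual page 0x10 -> guest physical page 0x80,
# and guest physical page 0x80 -> host physical page 0x3A0.
stage1 = {0x10: 0x80}
stage2 = {0x80: 0x3A0}
print(hex(guest_virtual_to_host_physical(0x10_123, stage1, stage2)))  # 0x3a0123
```

Swapping out the base pointers of the two tables on a context switch, as described above, corresponds in this sketch to simply handing different `stage1`/`stage2` dictionaries to the walk.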

[0105] The CPUs 361, GPUs 380, and I/O devices 362 may be integrated on a single semiconductor chip and/or chip package. The illustrated memory 366 may be integrated on the same chip or may be coupled to the memory controllers 367 via an off-chip interface. In one implementation, the memory 366 comprises GDDR6 memory which shares the same virtual address space as other physical system-level memories, although the underlying principles described herein are not limited to this specific implementation.

[0106] The tensor cores 371 may include a plurality of execution units specifically designed to perform matrix operations, which are the fundamental compute operation used to perform deep learning operations. For example, simultaneous matrix multiplication operations may be used for neural network training and inferencing. The tensor cores 371 may perform matrix processing using a variety of operand precisions including single precision floating-point (e.g., 32 bits), half-precision floating point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and half-bytes (4 bits). For example, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image.

[0107] In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores 371. The training of neural networks, in particular, requires a significant number of matrix dot-product operations. In order to process an inner-product formulation of an N×N×N matrix multiply, the tensor cores 371 may include at least N dot-product processing elements. Before the matrix multiply begins, one entire matrix is loaded into tile registers and at least one column of a second matrix is loaded each cycle for N cycles. Each cycle, N dot products are processed.
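A minimal sketch of this inner-product formulation follows, with a loop iteration standing in for a cycle; the use of numpy and the helper name are assumptions, and no claim is made that the tensor cores 371 operate exactly this way.

```python
import numpy as np

def tensor_core_matmul(A, B):
    """Inner-product formulation of an N x N x N matrix multiply.

    Models the description above: matrix A is held resident (as if loaded
    into tile registers) and one column of B is consumed per "cycle";
    each cycle computes N dot products, producing one output column.
    """
    N = A.shape[0]
    C = np.zeros((N, N), dtype=A.dtype)
    for cycle in range(N):          # one column of B loaded per cycle
        b_col = B[:, cycle]
        C[:, cycle] = A @ b_col     # N dot products processed this cycle
    return C

A = np.random.rand(4, 4).astype(np.float32)
B = np.random.rand(4, 4).astype(np.float32)
assert np.allclose(tensor_core_matmul(A, B), A @ B, atol=1e-5)
```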

[0108] Matrix elements may be stored at different precisions depending on the particular implementation, including 16-bit words, 8-bit bytes (e.g., INT8) and 4-bit half-bytes (e.g., INT4). Different precision modes may be specified for the tensor cores 371 to ensure that the most efficient precision is used for different workloads (e.g., such as inferencing workloads which can tolerate quantization to bytes and half-bytes).

[0109] The ray tracing cores 372 may accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores 372 may include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores 372 may also include circuitry for performing depth testing and culling (e.g., using a Z buffer or similar arrangement). In one implementation, the ray tracing cores 372 perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores 371. For example, the tensor cores 371 may implement a deep learning neural network to perform denoising of frames generated by the ray tracing cores 372. However, the CPU(s) 361, graphics cores 370, and/or ray tracing cores 372 may also implement all or a portion of the denoising and/or deep learning algorithms.

[0110] In addition, as described above, a distributed approach to denoising may be employed in which the GPU 380 is in a computing device coupled to other computing devices over a network or high-speed interconnect. In this distributed approach, the interconnected computing devices may share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications.

[0111] The ray tracing cores 372 may process all BVH traversal and/or ray-primitive intersections, saving the graphics cores 370 from being overloaded with thousands of instructions per ray. For example, each ray tracing core 372 includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and/or a second set of specialized circuitry for performing the ray-triangle intersection tests (e.g., intersecting rays which have been traversed). Thus, for example, the multi-core group 365A can simply launch a ray probe, and the ray tracing cores 372 independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. The other cores 370, 371 are freed to perform other graphics or compute work while the ray tracing cores 372 perform the traversal and intersection operations.
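The following is a hedged, software-only sketch of the kind of bounding box test a traversal unit might perform (a standard slab test; the function name and argument layout are assumptions, not the circuitry of the ray tracing cores 372).

```python
def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab-method ray/bounding-box test of the sort used during traversal.

    origin, inv_dir, box_min and box_max are 3-element sequences; inv_dir
    holds 1/direction per axis, precomputed once per ray.
    """
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t0 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t1 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        if t0 > t1:
            t0, t1 = t1, t0
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
        if t_near > t_far:
            return None        # "no hit"
    return t_near              # entry distance for a "hit"

# A ray from the origin with direction (1, 1, 1) hits a unit box near (5, 5, 5).
print(ray_aabb_hit((0.0, 0.0, 0.0), (1.0, 1.0, 1.0),
                   (4.5, 4.5, 4.5), (5.5, 5.5, 5.5)))  # 4.5
```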

[0112] Optionally, each ray tracing core 372 may include a traversal unit to perform BVH testing operations and/or an intersection unit which performs ray-primitive intersection tests. The intersection unit generates a "hit", "no hit", or "multiple hit" response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., graphics cores 370 and tensor cores 371) are freed to perform other forms of graphics work.

[0113] In one optional embodiment described below, a hybrid rasterization/ray tracing approach is used in which work is distributed between the graphics cores 370 and ray tracing cores 372.

[0114] The ray tracing cores 372 (and/or other cores 370, 371) may include hardware support for a ray tracing instruction set such as Microsoft's DirectX Ray Tracing (DXR) which includes a DispatchRays command, as well as ray-generation, closest-hit, any-hit, and miss shaders, which enable the assignment of unique sets of shaders and textures for each object. Another ray tracing platform which may be supported by the ray tracing cores 372, graphics cores 370 and tensor cores 371 is Vulkan 1.1.85. Note, however, that the underlying principles described herein are not limited to any particular ray tracing ISA.

[0115] In general, the various cores 372, 371, 370 may support a ray tracing instruction set that includes instructions/functions for one or more of ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions. More specifically, a preferred embodiment includes ray tracing instructions to perform one or more of the following functions (a brief software sketch of how these functions cooperate follows the list):

[0116] Ray Generation--Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment.

[0117] Closest Hit--A closest hit instruction may be executed to locate the closest intersection point of a ray with primitives within a scene.

[0118] Any Hit--An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially to identify a new closest intersection point.

[0119] Intersection--An intersection instruction performs a ray-primitive intersection test and outputs a result.

[0120] Per-primitive Bounding box Construction--This instruction builds a bounding box around a given primitive or group of primitives (e.g., when building a new BVH or other acceleration data structure).

[0121] Miss--Indicates that a ray misses all geometry within a scene, or a specified region of a scene.

[0122] Visit--Indicates the child volumes a ray will traverse.

[0123] Exceptions--Includes various types of exception handlers (e.g., invoked for various error conditions).
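As referenced above, the following sketch loosely ties these functions together in software: ray generation per pixel, a visit/intersection loop over candidate primitives, and closest-hit or miss handling. The Sphere primitive, shader return values, and camera setup are assumptions for illustration only and do not reflect any particular ray tracing ISA.

```python
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple
    radius: float

    def intersect(self, origin, direction):
        """Ray/sphere Intersection test; returns hit distance or None."""
        ox, oy, oz = (origin[i] - self.center[i] for i in range(3))
        dx, dy, dz = direction
        b = 2 * (ox * dx + oy * dy + oz * dz)
        c = ox * ox + oy * oy + oz * oz - self.radius ** 2
        disc = b * b - 4 * c
        if disc < 0:
            return None
        t = (-b - disc ** 0.5) / 2
        return t if t > 0 else None

def trace_ray(origin, direction, scene):
    """Visit each candidate, run the Intersection test, keep the Closest Hit,
    and fall back to Miss when nothing is hit."""
    best = None
    for prim in scene:                          # Visit
        t = prim.intersect(origin, direction)   # Intersection / Any Hit candidate
        if t is not None and (best is None or t < best[0]):
            best = (t, prim)                    # new closest intersection
    if best is None:
        return "miss"                           # Miss handling
    return f"closest hit at t={best[0]:.2f}"    # Closest Hit handling

# Ray Generation: one ray per pixel of a tiny 2x2 image, all pointing +z.
scene = [Sphere((0.0, 0.0, 5.0), 1.0)]
for y in range(2):
    for x in range(2):
        print(trace_ray((x - 0.5, y - 0.5, 0.0), (0.0, 0.0, 1.0), scene))
```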

[0124] Techniques for GPU to Host Processor Interconnection

[0125] FIG. 4A illustrates an exemplary architecture in which a plurality of GPUs 410-413, e.g., the parallel processors 200 shown in FIG. 2A, are communicatively coupled to a plurality of multi-core processors 405-406 over high-speed links 440A-440D (e.g., buses, point-to-point interconnects, etc.). The high-speed links 440A-440D may support a communication throughput of 4 GB/s, 30 GB/s, 80 GB/s or higher, depending on the implementation. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles described herein are not limited to any particular communication protocol or throughput.

[0126] Two or more of the GPUs 410-413 may be interconnected over high-speed links 442A-442B, which may be implemented using the same or different protocols/links than those used for high-speed links 440A-440D. Similarly, two or more of the multi-core processors 405-406 may be connected over a high-speed link 443, which may be a symmetric multi-processor (SMP) bus operating at 20 GB/s, 30 GB/s, 120 GB/s or higher. Alternatively, all communication between the various system components shown in FIG. 4A may be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles described herein are not limited to any particular type of interconnect technology.

[0127] Each multi-core processor 405-406 may be communicatively coupled to a processor memory 401-402, via memory interconnects 430A-430B, respectively, and each GPU 410-413 is communicatively coupled to GPU memory 420-423 over GPU memory interconnects 450A-450D, respectively. The memory interconnects 430A-430B and 450A-450D may utilize the same or different memory access technologies. By way of example, and not limitation, the processor memories 401-402 and GPU memories 420-423 may be volatile memories such as dynamic random-access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint/Optane or Nano-Ram. For example, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy).

[0128] As described below, although the various processors 405-406 and GPUs 410-413 may be physically coupled to a particular memory 401-402, 420-423, respectively, a unified memory architecture may be implemented in which the same virtual system address space (also referred to as the "effective address" space) is distributed among all of the various physical memories. For example, processor memories 401-402 may each comprise 64 GB of the system memory address space and GPU memories 420-423 may each comprise 32 GB of the system memory address space (resulting in a total of 256 GB addressable memory in this example).
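A short sketch of the example partitioning arithmetic follows; the ordering of the regions and the contiguous layout are assumptions used only to show how the 256 GB of addressable memory in the example is composed.

```python
GB = 1 << 30

# Assumed example layout: two 64 GB processor memories followed by four
# 32 GB GPU memories in one contiguous virtual/effective address space.
regions = [("processor memory 401", 64 * GB), ("processor memory 402", 64 * GB),
           ("GPU memory 420", 32 * GB), ("GPU memory 421", 32 * GB),
           ("GPU memory 422", 32 * GB), ("GPU memory 423", 32 * GB)]

base = 0
layout = []
for name, size in regions:
    layout.append((name, base, base + size - 1))
    base += size

for name, lo, hi in layout:
    print(f"{name}: {lo:#014x} - {hi:#014x}")
print(f"total addressable: {base // GB} GB")  # 256 GB, as in the example
```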

[0129] FIG. 4B illustrates additional optional details for an interconnection between a multi-core processor 407 and a graphics acceleration module 446. In some embodiments, the multi-core processor 407 and a graphics acceleration module 446 may implement aspects of computing architecture 900 (FIG. 9A), architecture 960 (FIG. 9B), method 1010 (FIG. 9C), and method 1030 (FIG. 9D). The graphics acceleration module 446 may include one or more GPU chips integrated on a line card which is coupled to the processor 407 via the high-speed link 440. Alternatively, the graphics acceleration module 446 may be integrated on the same package or chip as the processor 407.

[0130] The illustrated processor 407 includes a plurality of cores 460A-460D, each with a translation lookaside buffer 461A-461D and one or more caches 462A-462D. The cores may include various other components for executing instructions and processing data which are not illustrated to avoid obscuring the underlying principles of the components described herein (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.). The caches 462A-462D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches 456 may be included in the caching hierarchy and shared by sets of the cores 460A-460D. For example, one embodiment of the processor 407 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one of the L2 and L3 caches is shared by two adjacent cores. The processor 407 and the graphics accelerator integration module 446 connect with system memory 441, which may include processor memories 401-402.

[0131] Coherency is maintained for data and instructions stored in the various caches 462A-462D, 456 and system memory 441 via inter-core communication over a coherence bus 464. For example, each cache may have cache coherency logic/circuitry associated therewith to communicate over the coherence bus 464 in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherence bus 464 to snoop cache accesses. Cache snooping/coherency techniques are well understood by those of skill in the art and will not be described in detail here to avoid obscuring the underlying principles described herein.

[0132] A proxy circuit 425 may be provided that communicatively couples the graphics acceleration module 446 to the coherence bus 464, allowing the graphics acceleration module 446 to participate in the cache coherence protocol as a peer of the cores. In particular, an interface 435 provides connectivity to the proxy circuit 425 over high-speed link 440 (e.g., a PCIe bus, NVLink, etc.) and an interface 437 connects the graphics acceleration module 446 to the high-speed link 440.

[0133] In one implementation, an accelerator integration circuit 436 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 431, 432, N of the graphics acceleration module 446. The graphics processing engines 431, 432, N may each comprise a separate graphics processing unit (GPU). Alternatively, the graphics processing engines 431, 432, N may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In other words, the graphics acceleration module may be a GPU with a plurality of graphics processing engines 431-432, N or the graphics processing engines 431-432, N may be individual GPUs integrated on a common package, line card, or chip.

[0134] The accelerator integration circuit 436 may include a memory management unit (MMU) 439 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 441. The MMU 439 may also include a translation lookaside buffer (TLB) (not shown) for caching the virtual/effective to physical/real address translations. In one implementation, a cache 438 stores commands and data for efficient access by the graphics processing engines 431-432, N. The data stored in cache 438 and graphics memories 433-434, M may be kept coherent with the core caches 462A-462D, 456 and system memory 411. As mentioned, this may be accomplished via proxy circuit 425 which takes part in the cache coherency mechanism on behalf of cache 438 and memories 433-434, M (e.g., sending updates to the cache 438 related to modifications/accesses of cache lines on processor caches 462A-462D, 456 and receiving updates from the cache 438).

[0135] A set of registers 445 store context data for threads executed by the graphics processing engines 431-432, N and a context management circuit 448 manages the thread contexts. For example, the context management circuit 448 may perform save and restore operations to save and restore contexts of the various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, the context management circuit 448 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore the register values when returning to the context. An interrupt management circuit 447, for example, may receive and process interrupts received from system devices.

[0136] In one implementation, virtual/effective addresses from a graphics processing engine 431 are translated to real/physical addresses in system memory 411 by the MMU 439. Optionally, the accelerator integration circuit 436 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 446 and/or other accelerator devices. The graphics accelerator module 446 may be dedicated to a single application executed on the processor 407 or may be shared between multiple applications. Optionally, a virtualized graphics execution environment is provided in which the resources of the graphics processing engines 431-432, N are shared with multiple applications or virtual machines (VMs). The resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on the processing requirements and priorities associated with the VMs and/or applications.

[0137] Thus, the accelerator integration circuit 436 acts as a bridge to the system for the graphics acceleration module 446 and provides address translation and system memory cache services. In one embodiment, to facilitate the bridging functionality, the accelerator integration circuit 436 may also include shared I/O 497 (e.g., PCIe, USB) and hardware to enable system control of voltage, clocking, performance, thermals, and security. The shared I/O 497 may utilize separate physical connections or may traverse the high-speed link 440. In addition, the accelerator integration circuit 436 may provide virtualization facilities for the host processor to manage virtualization of the graphics processing engines, interrupts, and memory management.

[0138] Because hardware resources of the graphics processing engines 431-432, N are mapped explicitly to the real address space seen by the host processor 407, any host processor can address these resources directly using an effective address value. One optional function of the accelerator integration circuit 436 is the physical separation of the graphics processing engines 431-432, N so that they appear to the system as independent units.

[0139] One or more graphics memories 433-434, M may be coupled to each of the graphics processing engines 431-432, N, respectively. The graphics memories 433-434, M store instructions and data being processed by each of the graphics processing engines 431-432, N. The graphics memories 433-434, M may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint/Optane or Nano-Ram.

[0140] To reduce data traffic over the high-speed link 440, biasing techniques may be used to ensure that the data stored in graphics memories 433-434, M is data which will be used most frequently by the graphics processing engines 431-432, N and preferably not used by the cores 460A-460D (at least not frequently). Similarly, the biasing mechanism attempts to keep data needed by the cores (and preferably not the graphics processing engines 431-432, N) within the caches 462A-462D, 456 of the cores and system memory 411.

[0141] According to a variant shown in FIG. 4C, the accelerator integration circuit 436 is integrated within the processor 407. The graphics processing engines 431-432, N communicate directly over the high-speed link 440 to the accelerator integration circuit 436 via interface 437 and interface 435 (which, again, may utilize any form of bus or interface protocol). The accelerator integration circuit 436 may perform the same operations as those described with respect to FIG. 4B, but potentially at a higher throughput given its close proximity to the coherency bus 464 and caches 462A-462D, 456.

[0142] The embodiments described may support different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization). The latter may include programming models which are controlled by the accelerator integration circuit 436 and programming models which are controlled by the graphics acceleration module 446.

[0143] In the embodiments of the dedicated process model, graphics processing engines 431-432, N may be dedicated to a single application or process under a single operating system. The single application can funnel other application requests to the graphics engines 431-432, N, providing virtualization within a VM/partition.

[0144] In the shared programming models, the graphics processing engines 431-432, N may be shared by multiple VM/application partitions. The shared models require a system hypervisor to virtualize the graphics processing engines 431-432, N to allow access by each operating system. For single-partition systems without a hypervisor, the graphics processing engines 431-432, N are owned by the operating system. In both cases, the operating system can virtualize the graphics processing engines 431-432, N to provide access to each process or application.

[0145] For the shared programming model, the graphics acceleration module 446 or an individual graphics processing engine 431-432, N selects a process element using a process handle. The process elements may be stored in system memory 411 and be addressable using the effective address to real address translation techniques described herein. The process handle may be an implementation-specific value provided to the host process when registering its context with the graphics processing engine 431-432, N (that is, calling system software to add the process element to the process element linked list). The lower 16 bits of the process handle may be the offset of the process element within the process element linked list.
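A minimal sketch of using the lower 16 bits of the process handle as the offset into the process element linked list follows; the handle value, list base address, and function names are hypothetical and used only to illustrate the bit extraction.

```python
def element_offset(process_handle):
    """Extract the lower 16 bits of the process handle, interpreted here as
    the offset of the process element within the process element linked list."""
    return process_handle & 0xFFFF

def locate_process_element(list_base, process_handle):
    """Compute the effective address of the process element (list base plus
    the offset carried in the handle)."""
    return list_base + element_offset(process_handle)

handle = 0xBEEF_0040           # implementation-specific value from registration
print(hex(locate_process_element(0x2000_0000, handle)))  # 0x20000040
```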

[0146] FIG. 4D illustrates an exemplary accelerator integration slice 490. As used herein, a "slice" comprises a specified portion of the processing resources of the accelerator integration circuit 436. Application effective address space 482 within system memory 411 stores process elements 483. The process elements 483 may be stored in response to GPU invocations 481 from applications 480 executed on the processor 407. A process element 483 contains the process state for the corresponding application 480. A work descriptor (WD) 484 contained in the process element 483 can be a single job requested by an application or may contain a pointer to a queue of jobs. In the latter case, the WD 484 is a pointer to the job request queue in the application's address space 482.

[0147] The graphics acceleration module 446 and/or the individual graphics processing engines 431-432, N can be shared by all or a subset of the processes in the system. For example, the technologies described herein may include an infrastructure for setting up the process state and sending a WD 484 to a graphics acceleration module 446 to start a job in a virtualized environment.

[0148] In one implementation, the dedicated-process programming model is implementation-specific. In this model, a single process owns the graphics acceleration module 446 or an individual graphics processing engine 431. Because the graphics acceleration module 446 is owned by a single process, the hypervisor initializes the accelerator integration circuit 436 for the owning partition and the operating system initializes the accelerator integration circuit 436 for the owning process at the time when the graphics acceleration module 446 is assigned.

[0149] In operation, a WD fetch unit 491 in the accelerator integration slice 490 fetches the next WD 484 which includes an indication of the work to be done by one of the graphics processing engines of the graphics acceleration module 446. Data from the WD 484 may be stored in registers 445 and used by the MMU 439, interrupt management circuit 447 and/or context management circuit 448 as illustrated. For example, the MMU 439 may include segment/page walk circuitry for accessing segment/page tables 486 within the OS virtual address space 485. The interrupt management circuit 447 may process interrupt events 492 received from the graphics acceleration module 446. When performing graphics operations, an effective address 493 generated by a graphics processing engine 431-432, N is translated to a real address by the MMU 439.

[0150] The same set of registers 445 may be duplicated for each graphics processing engine 431-432, N and/or graphics acceleration module 446 and may be initialized by the hypervisor or operating system. Each of these duplicated registers may be included in an accelerator integration slice 490. Exemplary registers that may be initialized by the hypervisor are shown in Table 1.

TABLE 1
Hypervisor Initialized Registers
1 Slice Control Register
2 Real Address (RA) Scheduled Processes Area Pointer
3 Authority Mask Override Register
4 Interrupt Vector Table Entry Offset
5 Interrupt Vector Table Entry Limit
6 State Register
7 Logical Partition ID
8 Real Address (RA) Hypervisor Accelerator Utilization Record Pointer
9 Storage Description Register

[0151] Exemplary registers that may be initialized by the operating system are shown in Table 2.

TABLE 2
Operating System Initialized Registers
1 Process and Thread Identification
2 Effective Address (EA) Context Save/Restore Pointer
3 Virtual Address (VA) Accelerator Utilization Record Pointer
4 Virtual Address (VA) Storage Segment Table Pointer
5 Authority Mask
6 Work Descriptor

[0152] Each WD 484 may be specific to a particular graphics acceleration module 446 and/or graphics processing engine 431-432, N. It contains all the information a graphics processing engine 431-432, N requires to do its work or it can be a pointer to a memory location where the application has set up a command queue of work to be completed.

[0153] FIG. 4E illustrates additional optional details of a shared model. It includes a hypervisor real address space 498 in which a process element list 499 is stored. The hypervisor real address space 498 is accessible via a hypervisor 496 which virtualizes the graphics acceleration module engines for the operating system 495.

[0154] The shared programming models allow for all or a subset of processes from all or a subset of partitions in the system to use a graphics acceleration module 446. There are two programming models where the graphics acceleration module 446 is shared by multiple processes and partitions: time-sliced shared and graphics directed shared.

[0155] In this model, the system hypervisor 496 owns the graphics acceleration module 446 and makes its function available to all operating systems 495. For a graphics acceleration module 446 to support virtualization by the system hypervisor 496, the graphics acceleration module 446 may adhere to the following requirements: 1) An application's job request must be autonomous (that is, the state does not need to be maintained between jobs), or the graphics acceleration module 446 must provide a context save and restore mechanism. 2) An application's job request is guaranteed by the graphics acceleration module 446 to complete in a specified amount of time, including any translation faults, or the graphics acceleration module 446 provides the ability to preempt the processing of the job. 3) The graphics acceleration module 446 must be guaranteed fairness between processes when operating in the directed shared programming model.

[0156] For the shared model, the application 480 may be required to make an operating system 495 system call with a graphics acceleration module 446 type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). The graphics acceleration module 446 type describes the targeted acceleration function for the system call. The graphics acceleration module 446 type may be a system-specific value. The WD is formatted specifically for the graphics acceleration module 446 and can be in the form of a graphics acceleration module 446 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe the work to be done by the graphics acceleration module 446. In one embodiment, the AMR value is the AMR state to use for the current process. The value passed to the operating system is similar to an application setting the AMR. If the accelerator integration circuit 436 and graphics acceleration module 446 implementations do not support a User Authority Mask Override Register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. The hypervisor 496 may optionally apply the current Authority Mask Override Register (AMOR) value before placing the AMR into the process element 483. The CSRP may be one of the registers 445 containing the effective address of an area in the application's address space 482 for the graphics acceleration module 446 to save and restore the context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. The context save/restore area may be pinned system memory.
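The following sketch models the AMR masking steps described above; the choice of bitwise AND as the combining rule and the register values shown are assumptions, since the exact mask semantics are implementation-specific.

```python
def os_prepare_amr(app_amr, uamor):
    """OS step: apply the current UAMOR to the application-supplied AMR
    before passing the AMR in the hypervisor call (bitwise AND is an
    assumption; the real combining rule is implementation-specific)."""
    return app_amr & uamor

def hypervisor_place_amr(amr_from_os, amor):
    """Hypervisor step: optionally apply the current AMOR before placing
    the AMR into the process element 483."""
    return amr_from_os & amor

amr = 0b1111_0000     # AMR state requested for the current process
uamor = 0b1100_1100   # bits the user/OS is allowed to influence (assumed)
amor = 0b1111_1110    # bits the hypervisor allows through (assumed)
print(bin(hypervisor_place_amr(os_prepare_amr(amr, uamor), amor)))  # 0b11000000
```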

[0157] Upon receiving the system call, the operating system 495 may verify that the application 480 has registered and been given the authority to use the graphics acceleration module 446. The operating system 495 then calls the hypervisor 496 with the information shown in Table 3.

TABLE 3
OS to Hypervisor Call Parameters
1 A work descriptor (WD)
2 An Authority Mask Register (AMR) value (potentially masked)
3 An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4 A process ID (PID) and optional thread ID (TID)
5 A virtual address (VA) accelerator utilization record pointer (AURP)
6 The virtual address of the storage segment table pointer (SSTP)
7 A logical interrupt service number (LISN)

[0158] Upon receiving the hypervisor call, the hypervisor 496 verifies that the operating system 495 has registered and been given the authority to use the graphics acceleration module 446. The hypervisor 496 then puts the process element 483 into the process element linked list for the corresponding graphics acceleration module 446 type. The process element may include the information shown in Table 4.

TABLE 4
Process Element Information
1 A work descriptor (WD)
2 An Authority Mask Register (AMR) value (potentially masked)
3 An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4 A process ID (PID) and optional thread ID (TID)
5 A virtual address (VA) accelerator utilization record pointer (AURP)
6 The virtual address of the storage segment table pointer (SSTP)
7 A logical interrupt service number (LISN)
8 Interrupt vector table, derived from the hypervisor call parameters
9 A state register (SR) value
10 A logical partition ID (LPID)
11 A real address (RA) hypervisor accelerator utilization record pointer
12 The Storage Descriptor Register (SDR)

[0159] The hypervisor may initialize a plurality of accelerator integration slice 490 registers 445.

[0160] As illustrated in FIG. 4F, in one optional implementation a unified memory, addressable via a common virtual memory address space used to access the physical processor memories 401-402 and GPU memories 420-423, is employed. In this implementation, operations executed on the GPUs 410-413 utilize the same virtual/effective memory address space to access the processor memories 401-402 and vice versa, thereby simplifying programmability. A first portion of the virtual/effective address space may be allocated to the processor memory 401, a second portion to the second processor memory 402, a third portion to the GPU memory 420, and so on. The entire virtual/effective memory space (sometimes referred to as the effective address space) may thereby be distributed across each of the processor memories 401-402 and GPU memories 420-423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.

[0161] Bias/coherence management circuitry 494A-494E within one or more of the MMUs 439A-439E may be provided that ensures cache coherence between the caches of the host processors (e.g., 405) and the GPUs 410-413 and implements biasing techniques indicating the physical memories in which certain types of data should be stored. While multiple instances of bias/coherence management circuitry 494A-494E are illustrated in FIG. 4F, the bias/coherence circuitry may be implemented within the MMU of one or more host processors 405 and/or within the accelerator integration circuit 436. In some embodiments, the host processor 405 may encrypt data and pointers with a key and share the key with GPUs 410-413 as described in the embodiments of process 3100 (FIG. 7E), method 3000 (FIG. 7F), and process 3200 (FIG. 7G), cryptographic diffusion and confusion 2580 (FIG. 7H) and/or resources diagram 2584 (FIG. 7I).

[0162] The GPU-attached memory 420-423 may be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering the typical performance drawbacks associated with full system cache coherence. The ability of the GPU-attached memory 420-423 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows the host processor 405 software to set up operands and access computation results without the overhead of traditional I/O DMA data copies. Such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. At the same time, the ability to access GPU-attached memory 420-423 without cache coherence overheads can be critical to the execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce the effective write bandwidth seen by a GPU 410-413. The efficiency of operand setup, the efficiency of results access, and the efficiency of GPU computation all play a role in determining the effectiveness of GPU offload.

[0163] A selection between GPU bias and host processor bias may be driven by a bias tracker data structure. A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. The bias table may be implemented in a stolen memory range of one or more GPU-attached memories 420-423, with or without a bias cache in the GPU 410-413 (e.g., to cache frequently/recently used entries of the bias table). Alternatively, the entire bias table may be maintained within the GPU.
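A minimal sketch of a page-granular bias table follows, assuming one bit per GPU-attached memory page packed into a byte array; the class name, page size, and bit encoding are assumptions rather than the embodiments' layout.

```python
PAGE_SHIFT = 12          # assumed 4 KiB page granularity
GPU_BIAS, HOST_BIAS = 1, 0

class BiasTable:
    """Page-granular bias tracker: one bit per GPU-attached memory page,
    packed into a bytearray (which could live in a stolen memory range)."""

    def __init__(self, num_pages):
        self.bits = bytearray((num_pages + 7) // 8)

    def get(self, address):
        page = address >> PAGE_SHIFT
        return (self.bits[page // 8] >> (page % 8)) & 1

    def set(self, address, bias):
        page = address >> PAGE_SHIFT
        if bias:
            self.bits[page // 8] |= 1 << (page % 8)
        else:
            self.bits[page // 8] &= ~(1 << (page % 8))

table = BiasTable(num_pages=1024)
table.set(0x3000, GPU_BIAS)
print(table.get(0x3000), table.get(0x4000))  # 1 0
```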

[0164] In one implementation, the bias table entry associated with each access to the GPU-attached memory 420-423 is accessed prior to the actual access to the GPU memory, causing the following operations. First, local requests from the GPU 410-413 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 420-423. Local requests from the GPU that find their page in host bias are forwarded to the processor 405 (e.g., over a high-speed link as discussed above). Optionally, requests from the processor 405 that find the requested page in host processor bias complete the request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to the GPU 410-413. The GPU may then transition the page to a host processor bias if it is not currently using the page.
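Building on the bias-table idea above, the following standalone sketch routes an access according to the four cases just described; the string return values and the optional transition callback are illustrative assumptions, not the hardware flow.

```python
GPU_BIAS, HOST_BIAS = "gpu", "host"

def route_access(requester, page_bias, transition_page_to_host=None):
    """Route a memory access according to the page's bias (simplified model)."""
    if requester == "gpu":
        # GPU request: GPU-biased pages go straight to local GPU memory,
        # host-biased pages are forwarded to the processor over the link.
        return "gpu memory" if page_bias == GPU_BIAS else "forward to host"
    # Host request: host-biased pages complete like a normal memory read;
    # GPU-biased pages are forwarded to the GPU, which may later flip the bias.
    if page_bias == HOST_BIAS:
        return "normal memory read"
    if transition_page_to_host:
        transition_page_to_host()
    return "forward to GPU"

print(route_access("gpu", GPU_BIAS))    # gpu memory
print(route_access("host", GPU_BIAS))   # forward to GPU
```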

[0165] The bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism.

[0166] One mechanism for changing the bias state employs an API call (e.g., OpenCL), which, in turn, calls the GPU's device driver which, in turn, sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, perform a cache flushing operation in the host. The cache flushing operation is required for a transition from host processor 405 bias to GPU bias, but is not required for the opposite transition.

[0167] Cache coherency may be maintained by temporarily rendering GPU-biased pages uncacheable by the host processor 405. To access these pages, the processor 405 may request access from the GPU 410 which may or may not grant access right away, depending on the implementation. Thus, to reduce communication between the host processor 405 and GPU 410 it is beneficial to ensure that GPU-biased pages are those which are required by the GPU but not the host processor 405 and vice versa.

[0168] Graphics Processing Pipeline

[0169] FIG. 5 illustrates a graphics processing pipeline 500. A graphics multiprocessor, such as graphics multiprocessor 234 as in FIG. 2D, graphics multiprocessor 325 of FIG. 3A, graphics multiprocessor 350 of FIG. 3B can implement the illustrated graphics processing pipeline 500. The graphics multiprocessor can be included within the parallel processing subsystems as described herein, such as the parallel processor 200 of FIG. 2A, which may be related to the parallel processor(s) 112 of FIG. 1 and may be used in place of one of those. The various parallel processing systems can implement the graphics processing pipeline 500 via one or more instances of the parallel processing unit (e.g., parallel processing unit 202 of FIG. 2A) as described herein. For example, a shader unit (e.g., graphics multiprocessor 234 of FIG. 2C) may be configured to perform the functions of one or more of a vertex processing unit 504, a tessellation control processing unit 508, a tessellation evaluation processing unit 512, a geometry processing unit 516, and a fragment/pixel processing unit 524. The functions of data assembler 502, primitive assemblers 506, 514, 518, tessellation unit 510, rasterizer 522, and raster operations unit 526 may also be performed by other processing engines within a processing cluster (e.g., processing cluster 214 of FIG. 2A) and a corresponding partition unit (e.g., partition unit 220A-220N of FIG. 2A). The graphics processing pipeline 500 may also be implemented using dedicated processing units for one or more functions. It is also possible that one or more portions of the graphics processing pipeline 500 are performed by parallel processing logic within a general-purpose processor (e.g., CPU). Optionally, one or more portions of the graphics processing pipeline 500 can access on-chip memory (e.g., parallel processor memory 222 as in FIG. 2A) via a memory interface 528, which may be an instance of the memory interface 218 of FIG. 2A. The graphics processor pipeline 500 may also be implemented via a multi-core group 365A as in FIG. 3C.

[0170] The data assembler 502 is a processing unit that may collect vertex data for surfaces and primitives. The data assembler 502 then outputs the vertex data, including the vertex attributes, to the vertex processing unit 504. The vertex processing unit 504 is a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. The vertex processing unit 504 reads data that is stored in cache, local or system memory for use in processing the vertex data and may be programmed to transform the vertex data from an object-based coordinate representation to a world space coordinate space or a normalized device coordinate space.
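A hedged sketch of the vertex transformation described above follows, taking vertices from an object-based representation through a model matrix to world space and then through a combined view/projection matrix with a perspective divide to normalized device coordinates; the matrix values and numpy usage are assumptions for illustration, not the behavior of any particular vertex shader program.

```python
import numpy as np

def transform_vertices(vertices, model, view_proj):
    """Vertex-shader style transform: object space -> world space -> clip
    space, then a perspective divide to normalized device coordinates."""
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    world = homogeneous @ model.T          # object -> world
    clip = world @ view_proj.T             # world -> clip
    ndc = clip[:, :3] / clip[:, 3:4]       # perspective divide -> NDC
    return ndc

# A single triangle translated +5 along z, with a trivial projection that
# simply scales x and y (chosen so the arithmetic is easy to follow).
model = np.eye(4); model[2, 3] = 5.0
view_proj = np.diag([0.5, 0.5, 1.0, 1.0])
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(transform_vertices(tri, model, view_proj))
```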

[0171] A first instance of a primitive assembler 506 receives vertex attributes from the vertex processing unit 504. The primitive assembler 506 reads stored vertex attributes as needed and constructs graphics primitives for processing by tessellation control processing unit 508. The graphics primitives include triangles, line segments, points, patches, and so forth, as supported by various graphics processing application programming interfaces (APIs).

[0172] The tessellation control processing unit 508 treats the input vertices as control points for a geometric patch. The control points are transformed from an input representation of the patch (e.g., the patch's bases) to a representation that is suitable for use in surface evaluation by the tessellation evaluation processing unit 512. The tessellation control processing unit 508 can also compute tessellation factors for edges of geometric patches. A tessellation factor applies to a single edge and quantifies a view-dependent level of detail associated with the edge. A tessellation unit 510 is configured to receive the tessellation factors for edges of a patch and to tessellate the patch into multiple geometric primitives such as line, triangle, or quadrilateral primitives, which are transmitted to a tessellation evaluation processing unit 512. The tessellation evaluation processing unit 512 operates on parameterized coordinates of the subdivided patch to generate a surface representation and vertex attributes for each vertex associated with the geometric primitives.

[0173] A second instance of a primitive assembler 514 receives vertex attributes from the tessellation evaluation processing unit 512, reading stored vertex attributes as needed, and constructs graphics primitives for processing by the geometry processing unit 516. The geometry processing unit 516 is a programmable execution unit that executes geometry shader programs to transform graphics primitives received from primitive assembler 514 as specified by the geometry shader programs. The geometry processing unit 516 may be programmed to subdivide the graphics primitives into one or more new graphics primitives and calculate parameters used to rasterize the new graphics primitives.

[0174] The geometry processing unit 516 may be able to add or delete elements in the geometry stream. The geometry processing unit 516 outputs the parameters and vertices specifying new graphics primitives to primitive assembler 518. The primitive assembler 518 receives the parameters and vertices from the geometry processing unit 516 and constructs graphics primitives for processing by a viewport scale, cull, and clip unit 520. The geometry processing unit 516 reads data that is stored in parallel processor memory or system memory for use in processing the geometry data. The viewport scale, cull, and clip unit 520 performs clipping, culling, and viewport scaling and outputs processed graphics primitives to a rasterizer 522.

[0175] The rasterizer 522 can perform depth culling and other depth-based optimizations. The rasterizer 522 also performs scan conversion on the new graphics primitives to generate fragments and output those fragments and associated coverage data to the fragment/pixel processing unit 524. The fragment/pixel processing unit 524 is a programmable execution unit that is configured to execute fragment shader programs or pixel shader programs. The fragment/pixel processing unit 524 transforms fragments or pixels received from rasterizer 522, as specified by the fragment or pixel shader programs. For example, the fragment/pixel processing unit 524 may be programmed to perform operations including, but not limited to, texture mapping, shading, blending, texture correction and perspective correction to produce shaded fragments or pixels that are output to a raster operations unit 526. The fragment/pixel processing unit 524 can read data that is stored in either the parallel processor memory or the system memory for use when processing the fragment data. Fragment or pixel shader programs may be configured to shade at sample, pixel, tile, or other granularities depending on the sampling rate configured for the processing units.

[0176] The raster operations unit 526 is a processing unit that performs raster operations including, but not limited to stencil, z-test, blending, and the like, and outputs pixel data as processed graphics data to be stored in graphics memory (e.g., parallel processor memory 222 as in FIG. 2A, and/or system memory 104 as in FIG. 1), to be displayed on the one or more display device(s) 110 or for further processing by one of the one or more processor(s) 102 or parallel processor(s) 112. The raster operations unit 526 may be configured to compress z or color data that is written to memory and decompress z or color data that is read from memory.

[0177] Encryption Technology

[0178] Fine Grain Per Thread and Per GPU Slice Isolation (FIGS. 6A-6F)

[0179] As illustrated in FIG. 6A, some embodiments are drawn to low-latency, bit-length-parameterizable ciphers to encrypt GPU thread data in a SIMD environment. A different encryption key may be used per lane, enabling a fine-grained encryption scheme. A granular, lane-specific encryption process 600 is illustrated.

[0180] In this example, a graphics processor core 602 is illustrated that includes a first lane 602a, a second lane 602b and an N lane 602c. Each of the first lane 602a, second lane 602b and N lane 602c may execute a SIMD and/or SIMT process. For example, each of the SIMD lanes may process a different GPU thread associated with different tenants. Each of the first lane 602a, the second lane 602b and the N lane 602c may include hardware elements, such as vector register elements, thread processors, memory, etc. In some embodiments, each of the threads must present credentials for security and to identify appropriate encryption keys.

[0181] For example, the first lane 602a may execute a first thread associated with a first tenant, the second lane 602b may execute a second thread associated with a second tenant, and the N lane 602c may execute an N thread associated with an N tenant. The first lane 602a, second lane 602b and N lane 602c may be associated with a same computing architecture (e.g., located on a same SoC and/or graphics processor), and in particular a same core of the graphics processor 602. In this example, different threads may be encrypted differently. For example, in a multi-tenant scenario, multiple tenants may share resources. Some conventional applications may only enforce encryption at a context level so that data for each context is isolated into different portions of a graphics processor at a core level and encrypted accordingly. For example, each context may need a separate core to execute on and may not share the core with other contexts. Such applications may not encrypt at a granular level that permits dispersed distribution of context data throughout the first-N lanes 602a-602c of the graphics processor core 602 (e.g., in a discontinuous fashion) and inefficiently use the core (e.g., if a context cannot use all lanes). Some embodiments efficiently enforce isolation boundaries at a lane level such that different contexts (e.g., tenants) may share a same core, such as graphics processor core 602.

[0182] A key manager 602j may provide a first key 602g, second key 602h and N key 602i to the first encryption engine 602d, second encryption engine 602e and N encryption engine 602f respectively based on workloads and the credentials. For example, the key manager 602j may identify a context and/or tenant, as well as credentials associated with threads being executed, identify a key associated with the context and/or tenant and provide the key to the appropriate first encryption engine 602d, second encryption engine 602e and N encryption engine 602f.
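The following is a minimal software model of the key manager 602j behavior described above; the class layout, credential check, and random key material are assumptions, and the per-lane cipher itself is not modeled.

```python
import os

class KeyManager:
    """Toy model of a key manager: it tracks which tenant each lane is
    executing for and hands the tenant's key to that lane's encryption
    engine. Key generation and the engine interface are placeholders."""

    def __init__(self):
        self.tenant_keys = {}     # tenant id -> key bytes
        self.lane_tenant = {}     # lane id -> tenant id

    def provision_tenant(self, tenant_id):
        self.tenant_keys[tenant_id] = os.urandom(16)   # stand-in key material

    def assign_lane(self, lane_id, tenant_id, credentials_ok=True):
        if not credentials_ok:
            raise PermissionError("thread credentials rejected")
        self.lane_tenant[lane_id] = tenant_id

    def key_for_lane(self, lane_id):
        """What would be delivered to the per-lane encryption engine."""
        return self.tenant_keys[self.lane_tenant[lane_id]]

km = KeyManager()
km.provision_tenant("tenant-A"); km.provision_tenant("tenant-B")
km.assign_lane("lane-0", "tenant-A")
km.assign_lane("lane-1", "tenant-B")
assert km.key_for_lane("lane-0") != km.key_for_lane("lane-1")
```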

[0183] In this example, the first lane 602a may process a first thread associated with a first tenant (or first context), and generate data associated with the first thread (e.g., first thread is processed and generates data). The key manager 602j may identify that the first lane 602a is executing on behalf of the first tenant and provide the first key 602g to the first encryption engine 602d based on the first key being assigned to the first tenant. As the first lane 602a generates data associated with the first thread, the first encryption engine 602d encrypts the data with the first key 602g.

[0184] The second lane 602b may process a second thread associated with a second tenant (or second context), and generate data associated with the second thread (e.g., second thread is processed and generates data). The key manager 602j may identify that the second lane 602b is executing on behalf of the second tenant and provide the second key 602h to the second encryption engine 602e based on the second key being assigned to the second tenant. As the second lane 602b generates data associated with the second thread, the second encryption engine 602e encrypts the data with the second key 602h. It is worthwhile to note that the first encryption engine 602d, the second encryption engine 602e and N encryption engine 602f may concurrently encrypt data from the first lane 602a, second lane 602b and N lane 602c in synchronization with clock cycles.

[0185] The N lane 602c may process an N thread associated with an N tenant (or N context), and generate data associated with the N thread (e.g., N thread is processed and generates data). The key manager 602j may identify that the N lane 602c is executing on behalf of the N tenant and provide the N key 602i to the N encryption engine 602f based on the N key being assigned to the N tenant. As the N lane 602c generates data associated with the N thread, the N encryption engine 602f encrypts the data with the N key 602i.

[0186] Thus, each of the first lane 602a, second lane 602b and N lane 602c may be coupled to a dedicated encryption engine of the first encryption engine 602d, the second encryption engine 602e and the N encryption engine 602f to securely encrypt data. As such, each of the first lane 602a, second lane 602b and N lane 602c may have the flexibility to be encrypted differently than the other lanes of the first lane 602a, second lane 602b and N lane 602c, isolating threads at a granular lane level as opposed to a coarse core level.

[0187] The process 600 provides the encrypted data to device memory 604, 606. The device memory 604 may store encrypted first data 604a generated by the first lane 602a, encrypted second data 604b generated by the second lane 602b and encrypted N data 604N generated by the N lane 602c. The process 600 may then identify data requests 610. In some embodiments, each of the encrypted first data 604a, encrypted second data 604b and encrypted N data 604N may be stored in association with credentials for a thread that generated the respective data to facilitate retrieval (e.g., by a CPU and/or the GPU).

[0188] Furthermore, while FIG. 6A illustrates a one-to-one association between encrypted data, a lane such as the first-N lanes 602a-602c, a graphics engine, and an encryption/decryption engine such as the first encryption engine 602d, the second encryption engine 602e, and the N encryption engine 602f, it is to be understood that in some embodiments a "lane" may include multiple graphics compute engines and/or encryption engines. Thus, some embodiments may include an encryption/decryption engine per graphics engine, with each lane including multiple encryption/decryption engines and graphics engines producing data that is encrypted differently from each other with the encryption/decryption engines.

[0189] In some embodiments, a policy for determining the number of compute resources, memory, buffers, engines etc. may be based on an "Edge or Cloud workload" that may contain a Service Level Agreement (SLA) that authorizes use of greater or fewer resources on a GPU and a target completion time as a form of quality of service (QoS) for execution of a workload.

[0190] Turning now to FIG. 6B, process 600 may execute data specific decryption 612 based on the encrypted first data 604a, encrypted second data 604b and encrypted N data 604N retrieved from the device memory 604. For example, the key manager 602j may identify a marker or identification from the encrypted first data 604a, encrypted second data 604b and encrypted N data 604N to identify appropriate keys for decryption based on the corresponding encryption keys. In some embodiments, a method for provisioning a user and/or tenant key into the key manager 602j (e.g., a crypto key manager) uses a Process Address Space ID (PASID) structure or similar structure that maintains a table of per-tenant context that allows the key manager 602j to relate various tenant-specific keys to a tenant `slice`. The key manager 602j may use a handle, PASID value, a public key or a tenant identifier as the `marker` that identifies the tenant security context.
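A minimal sketch of such a PASID-indexed per-tenant context table follows; the entry fields, the use of the PASID value itself as the marker, and the key purposes are assumptions made for illustration.

```python
class PasidKeyTable:
    """Sketch of a PASID-indexed per-tenant context table: each entry relates
    a tenant 'slice' to its security context and keys, and a marker (here the
    PASID value itself) selects the entry. Field names are assumptions."""

    def __init__(self):
        self.entries = {}   # PASID -> {"tenant": ..., "keys": {...}}

    def provision(self, pasid, tenant_id, keys):
        self.entries[pasid] = {"tenant": tenant_id, "keys": dict(keys)}

    def key_for_marker(self, marker, purpose="lane-data"):
        """Resolve a marker (a PASID in this sketch) to a tenant-specific key."""
        return self.entries[marker]["keys"][purpose]

table = PasidKeyTable()
table.provision(pasid=0x42, tenant_id="tenant-A",
                keys={"lane-data": b"\x01" * 16, "display": b"\x02" * 16})
print(table.key_for_marker(0x42).hex())
```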

[0191] In some embodiments, the key manager 602j may identify that the encrypted first data 604a was encrypted by the first encryption engine 602d, and a time that the encryption occurred. Based on these identifications, the key manager 602j may determine that the first encryption engine 602d was utilizing the first key 602g during encryption of the encrypted first data 604a. Other implementations may be possible as well. For example, the encrypted first data 604a may include a value or field indicating that the first key 602g was used to encrypt the encrypted first data 604a. In some embodiments, credentials associated with a thread requesting the encrypted first data 604a may be verified and the first key 602g may be identified based on the credentials (e.g., the thread is associated with the first tenant).

[0192] In this example, the encrypted first data 604a is to be processed by the first lane 602a, so the key manager 602j provides the first key 602g to the first encryption engine 602d. Likewise, the key manager 602j may provide the second key 602h to the second encryption engine 602e based on an identification that the encrypted second data 604b is assigned to the second lane 602b. Similarly, the key manager 602j may provide the N key 602i to the N encryption engine 602f based on an identification that the encrypted N data 604N is assigned to the N lane 602c. Thus, the first encryption engine 602d, the second encryption engine 602e and the N encryption engine 602f may decrypt the encrypted first data 604a, encrypted second data 604b and encrypted N data 604N to generate decrypted first data 614a, decrypted second data 614b and decrypted N data 614N. The first lane 602a, second lane 602b and N lane 602c may begin further processing on the decrypted first data 614a, decrypted second data 614b and decrypted N data 614N.

[0193] It will be understood that the above operations are flexible. For example, the data may be distributed differently: the first lane 602a may generate data that is encrypted with the first encryption engine 602d. Later, the data may be retrieved, decrypted by the N encryption engine 602f and operated on by the N lane 602c. That is, data may be transferred between lanes assuming that security protocols are complied with.

[0194] In some embodiments, the first encryption engine 602d, second encryption engine 602e and N encryption engine 602f may implement a specific parameterizable cipher to encrypt GPU thread data. Each workload (e.g., first thread, second thread, N thread) may present credentials to the graphics processor core 602 to request exclusive use of a graphics processor core 602 slice. The request may be supported as part of a graphics processor core 602 instruction. Furthermore, the request may also come to the graphics processor core 602 as part of an associated driver. Responsive to the request, various keys may be generated and utilized for encryption and decryption in processes associated with the workload and/or tenant by the key manager 602j. In some embodiments, a graphics processor core 602 slice may belong to a single isolated thread or a group of isolated threads.

[0195] Thus, some embodiments may enable encryption at far more granular levels (e.g., 32 bits and/or 64 bits) corresponding to lane size. Concurrently, multiple lanes may be encrypted according to different encryption keys to enable tenants to utilize a same graphics processor core 602 while respecting privacy, isolation and data compartmentalization between tenants. Furthermore, doing so enables more flexibility to write code that multiplies matrices. For example, in some approaches, scalars of a fused multiply-add (FMA), which are part of the same vector FMA, need to be associated with the same workload, up to some minimum acceptable size. In embodiments as described herein, each parallel scalar FMA, which is part of a vector FMA, may be associated with a different isolated workload.

[0196] Furthermore, the encryptions and decryptions may occur inside the graphics processor core 602 to avoid transference of unencrypted data along busses or other mediums. The graphics processor core 602 may be a tensor core that executes an operation in 3 clocks (e.g., with circuits that currently meet timing at frequencies up to 4 GHz) and in parallel across threads.

[0197] The inclusion of lightweight encryption engines, such as the first encryption engine 602d, second encryption engine 602e and N encryption engine 602f, may avoid some standards, such as the Advanced Encryption Standard (which may take up to 12 clock cycles to execute), and may be a drop-in replacement that provides performance enhancements. Some embodiments may include the key manager 602j that flexibly provisions the same tenant key across multiple fine-grain threads to achieve wide word sizes seamlessly for workloads that require a greater percentage of resources of the graphics processor core 602.

[0198] In some embodiments, the first encryption engine 602d, second encryption engine 602e and N encryption engine 602f may implement K-ciphers. Details are provided by Table I:

TABLE I

Cipher                     Area (μm²)    Latency (psec)    Number of clocks    Frequency
K-cipher Enc.-32, r = 2    614           613               3                   4.9 GHz
K-cipher Enc.-64, r = 2    1875          767               3                   3.9 GHz

[0199] FIG. 6C illustrates a SIMD architecture 660. The architecture 660 includes GPU cores 662 that include lanes that process threads as described herein. The GPU cores 662 may be connected with encryption engines 664 to encrypt and decrypt data. A local memory 666 and/or device memory 668 may store the encrypted data. In some embodiments, the local memory 666, encryption engines 664 and GPU cores 662 may be part of a same graphics processor, while the device memory 668 may be separate from the graphics processor.

[0200] FIG. 6D illustrates a method 670 that may provide enhanced and granular encryption and decryption. The method 670 may generally be implemented in any of the embodiments described herein, and may implement aspects of the key manager 602j and the first encryption engine 602d, second encryption engine 602e and N encryption engine 602f (FIGS. 6A and 6B) and/or be combined with the SIMD architecture 660 (FIG. 6C). In an embodiment, the method 670 is implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality logic hardware using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof.

[0201] For example, computer program code to carry out operations shown in the method 670 may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. Additionally, logic instructions might include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, state information that personalizes electronic circuitry and/or other structural components that are native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc.).

[0202] Illustrated processing block 672 receives a write request. Illustrated processing block 674 identifies a tenant associated with the write request. Illustrated processing block 676 determines whether a tenant encryption key exists for the tenant. If not, illustrated processing block 678 generates a new encryption key for the tenant. The encryption key may be stored in association with credentials of the tenant for further referencing. Illustrated processing block 680 encrypts the data according to the encryption key. Illustrated processing block 682 stores the encrypted data. While not illustrated, the encrypted data may be decrypted based on the encryption key and based on a request associated with a thread of the tenant that has the credentials.

[0203] FIG. 6E illustrates a method 690 to process read requests. The method 690 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example the key manager 602j and the first encryption engine 602d, second encryption engine 602e and N encryption engine 602f (FIGS. 6A and 6B), method 670 (FIG. 6D), and/or be combined with the SIMD architecture 660 (FIG. 6C) already discussed. More particularly, the method 690 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

[0204] Illustrated processing block 692 receives a read request. Illustrated processing block 694 identifies a key (e.g., a tenant specific key) associated with a tenant of the read request. In some embodiments, the key may be identified based on credentials associated with the key and/or a requesting thread. Illustrated processing block 696 decrypts data according to the key. Illustrated processing block 698 sends the decrypted data to a requesting device (e.g., a graphics processor and/or lane).
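
The write path of FIG. 6D and the read path of FIG. 6E may be modeled roughly as follows in Python. This is a sketch under the assumption of a simple keyed keystream cipher; the block numbers in the comments refer to the processing blocks above, and the storage and tenant_keys dictionaries are stand-ins for device memory and the key manager.

    import hashlib
    import secrets

    def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
        out, ctr = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return bytes(b ^ k for b, k in zip(data, out))

    tenant_keys = {}   # tenant credentials -> tenant encryption key
    storage = {}       # address -> (nonce, ciphertext); stand-in for device memory

    def handle_write(tenant: str, address: int, data: bytes) -> None:
        if tenant not in tenant_keys:                      # blocks 676/678
            tenant_keys[tenant] = secrets.token_bytes(16)  # new key for the tenant
        nonce = secrets.token_bytes(8)
        storage[address] = (nonce, keystream_xor(tenant_keys[tenant], nonce, data))  # blocks 680/682

    def handle_read(tenant: str, address: int) -> bytes:
        key = tenant_keys[tenant]                          # block 694: tenant-specific key
        nonce, ciphertext = storage[address]
        return keystream_xor(key, nonce, ciphertext)       # blocks 696/698

    handle_write("tenant_1", 0x1000, b"lane data")
    assert handle_read("tenant_1", 0x1000) == b"lane data"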

[0205] FIG. 6F illustrates an encryption and storage process 3300. The process 3300 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example process 600 (FIGS. 6A and 6B), method 670 (FIG. 6D), method 690 (FIG. 6E) and/or be combined with the SIMD architecture 660 (FIG. 6C) already discussed. In FIG. 6F, a plurality of execution cores 3302 of a GPU execute operations. The execution units may execute workloads associated with different contexts. A first core 3304, a second core 3306 and a third core 3308 may execute different workloads associated with different contexts. In this example, the first core 3304 may include execution units 3304a that produce data for different contexts and/or tenants. The data is encrypted by the lightweight cryptographic engines 3304b according to a first encryption (e.g., a first encryption key), a second encryption (e.g., a second encryption key), a third encryption (e.g., a third encryption key) and a fourth encryption (e.g., a fourth encryption key). For example, first workloads associated with a first tenant may be encrypted according to the first encryption, second workloads associated with a second tenant may be encrypted according to the second encryption, third workloads associated with a third tenant may be encrypted according to the third encryption, and fourth workloads associated with a fourth tenant may be encrypted according to the fourth encryption.

[0206] The first-fourth workloads may be distributed through the first core 3304, the second core 3306 and the third core 3308 resulting in the first-fourth encryptions being applied in each of the first core 3304, the second core 3306 and the third core 3308. For example, the lightweight cryptographic engines 3306b of the second core 3306 may encrypt data from the execution units 3306a according to the first-fourth encryptions, and the lightweight cryptographic engines 3308b of the third core 3308 may encrypt data from the execution units 3308a according to the first-fourth encryptions.

[0207] Similarly encrypted data may be associated with a same tenant. As such, the data encrypted according to the first encryption from the first core 3304, the second core 3306 and the third core 3308 may be concatenated together (e.g., in the GPU) to form a larger sized block width (e.g., 32 bits) for storage. The GPU may further concatenate together data that is encrypted according to the second encryption. The GPU may further concatenate together data that is encrypted according to the third encryption. The GPU may further concatenate together data that is encrypted according to the fourth encryption. The GPU may thus store the concatenated encrypted data 3314 in the cache 3316.
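
The concatenation step may be pictured with the following illustrative snippet, in which ciphertext fragments produced in different cores but encrypted under the same encryption (key) are grouped and joined before being written to the cache. The encryption identifiers and fragment contents are invented for the example.

    from collections import defaultdict

    # (encryption identifier, ciphertext fragment) pairs emitted by the cores.
    fragments = [
        ("first_encryption", b"\x01\x02"), ("second_encryption", b"\xaa"),  # from core 3304
        ("first_encryption", b"\x03\x04"), ("second_encryption", b"\xbb"),  # from core 3306
        ("first_encryption", b"\x05\x06"),                                  # from core 3308
    ]

    grouped = defaultdict(bytearray)
    for encryption_id, ciphertext in fragments:
        grouped[encryption_id] += ciphertext  # concatenate like-encrypted data

    cache = {encryption_id: bytes(block) for encryption_id, block in grouped.items()}
    # cache["first_encryption"] now holds a wider block of same-tenant ciphertext.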

[0208] Thus, some embodiments may permit flexible workload distribution with encryption on a lane-by-lane basis. In some examples, each workload may execute within a same lane of the first core 3304, second core 3306 and third core 3308 for the lifetime of operations of the workload. In some embodiments, operations of a workload may be distributed through multiple lanes which are each encrypted according to a same key.

[0209] While not illustrated, when the encrypted data in the cache 3316 is to be decrypted, the GPU may access a data structure indicating how the data was encrypted (e.g., according to one of the first-fourth encryptions). An identifying element from the data may be used, with reference to the data structure, to identify the encryption scheme.

[0210] Thus, embodiments support isolation and multitenancy at different levels of granularity (e.g., both vertically and horizontally isolated matrix multiplication). Moreover, some embodiments provide more flexibility to operate on code that multiplies matrices, while enforcing isolation. For example, in conventional approaches, scalars of fused multiply-adds (FMAs), which are part of the same vector FMA, need to be associated with the same workload, up to some minimum acceptable size to justify utilization of an entire execution unit (e.g., a certain number of lanes of the execution unit must be occupied to justify execution while still enforcing isolation). That is, isolation principles may not permit other workloads to execute on the execution unit to use unoccupied lanes. In some embodiments herein, each parallel scalar FMA, which is part of a vector FMA, may be associated with a different isolated workload executing on a same execution unit (multiple workloads execute on a same execution unit isolated in different lanes). The same principle may apply to other types of scalar operations.

[0211] Further, with respect to code streamers, a small(er) number of code streamers may not be a limitation. Embodiments may execute efficiently if the workloads that are isolated at a finer granularity execute the same code stream, which may be the case for machine learning (ML) workloads.

[0212] For example, ML workloads may be characterized by sequential matrix multiplications and fewer data dependent branches. Moreover, with regard to security considerations, lightweight 32-bit and 64-bit encryption systems may acceptably secure data. For example, Simon and Speck ciphers support 32-bit and 64-bit lengths. A PRINCE cipher performs encryption at 64-bit block granularity. Additionally, a key size in such ciphers may be much larger than the block size (e.g., a 32-bit block cipher approximates an ideal primitive, which is the 32-bit random permutation). The space of all 32-bit random permutations may be (2^32)! ≈ 2^32,000,000,000 using Stirling's approximation, and a 64- or 128-bit key defines a still cryptographically large space, which is not trivially searched. Thus, the lightweight cryptographic engines 3304b, 3306b, 3308b may implement lightweight ciphers (e.g., 32-bit or 64-bit) including Simon, Speck, PRINCE, K-cipher, etc. to provide a fine grain per SIMD lane isolation.

[0213] Moreover, with respect to security concerns, an intra-domain adversary may attempt to form dictionaries and launch dictionary-based attacks (i.e., use known plaintext-ciphertext pairs), and it may take about 2^32 efforts for such attacks when using the 32-bit cipher for instance. In the context of a GPU workload isolation, such considerations may be irrelevant because the intra-domain adversary may directly access or overwrite a victim's data. Moreover, an inter-domain adversary only observes ciphertexts. Accordingly, the cryptographic challenge is more difficult, and an adversary cannot determine which permutation is used for concealing the victim's data among the numerous choices.

[0214] Embodiments therefore provide flexibility in isolation granularity. For example, some embodiments involve per SIMD lane isolation and involve a lightweight cipher that may replace the AES engine of conventional architectures for performance reasons.

[0215] Method of Deciding Whether Content is Artificially Generated (FIGS. 6G-6I)

[0216] Deep fakes (e.g., synthetic media in which a person in an existing image or video is modified or replaced) have become increasingly common as graphics processing power has increased. In an effort to curb the reach of deep fakes, machine learning (ML) algorithms may attempt to classify data as a deep fake or a genuine image. Doing so however may be problematic and prone to error. For example, consider a first false positive scenario that may include a claim. Suppose that an image A is genuine and not a deep fake. A "defender" may successfully generate a similar image B using an ML algorithm and a number of ML models for the description of the entities involved (e.g., faces, voices, people, buildings, etc.). The "defender" may thus successfully put forward a case that the content of A is a deep fake, whereas in reality the image A is genuine.

[0217] Consider a second false positive example. Suppose that an image A is genuine and not a deep fake. Suppose further that a defender successfully generates, by means of an artificial intelligence (AI) algorithm, an alleged "source" image B, which has content overlapping with A, but semantically conveys a different meaning (e.g., the same person X shakes hands with person Y as opposed to person Z). In reality, image B is a deep fake. Image B is created using an ML algorithm and a number of ML models for the description of the entities involved (e.g., faces, voices, people, buildings, etc.). The defender successfully makes a case that some of the content of A was copy-pasted from the alleged source B or inserted using a range of valid transformations. In this case A is shown to be synthetic (a deep fake), even though it is not.

[0218] Now consider a first false negative scenario. Suppose that an image A is a deep fake. Specifically, it is synthetically generated using simple transformations from a source image B. A "defender" may present the true source image B from which some of the content was copy-pasted or extracted using a range of valid transformations. An "offender", however, may make a case that the true source B is also a deep fake by presenting an ML algorithm and a set of ML models that synthetically generate the content of true source image B, thus falsely indicating that the true source image B is a deep fake.

[0219] Now consider a second false negative scenario. An image may be a deep fake. A "defender" presents the true source image "B" from which some of the content was copy-pasted or extracted using a range of valid transformations. An "offender" produces additional content (e.g., audio files, video files, etc.) all of which are synthetically generated, and which the offender claims as original, that are semantically linked with the deep fake image and indicates falsely that it is a valid image.

[0220] As such, there may be significant difficulty in detecting deep fakes. The examples above indicate that as AI and/or ML algorithms evolve and produce more convincing synthetic content, deep fakes become more difficult to detect. For example, an average person would be unable to detect a deep fake in a practical manner.

[0221] Turning now to FIG. 6G, a performance enhanced computing architecture 3400 is shown. Some embodiments relate to a concept, schematics and functionality of a "Distributed Trustworthiness Record" (DTR) 3404, which is a data structure that may be used by rating agency 3408 to compute trustworthiness scores for content 3402. The content 3402 may be any type of content, such as video, audio, etc.

[0222] A plurality of applications 3406 (e.g., different sources) may enter claims into the DTR 3404. For example, a first application 3406a may enter a claim into entry 1 3404a, a second application 3406b may enter a claim into entry 2 3404b, a third application 3406c may enter a claim into entry 3 3404c, and an N application 3406n may enter a claim into entry N 3404n. Thus, the DTR 3404 may include a plurality of claims from various applications 3406 (e.g., different sources of analysis).

[0223] Notably, some of the claims may not be consistent with each other. For example, the first application 3406a may enter a claim that states that the content 3402 is a deep fake, while the third application 3406c may enter a claim that indicates that the content 3402 is genuine. Each of the claims however may include an indication of whether the content 3402 is fake or genuine, and how the indication was reached. The rating agency 3408 may analyze the claims and output a trustworthiness score that corresponds to whether the content 3402 is genuine or fake.

[0224] If there are conflicting "opinions" about content 3402, they are all inserted in the DTR 3404 and the rating agency 3408 (e.g., an ML algorithm) analyzes the claims (including the conflicting claims) for a score computation that generates a trustworthiness score. Thus, the DTR 3404 allows the plurality of applications 3406 (e.g., defenders) to place suggestions for the true source of a content 3402 together with the list of suggested transformations (e.g., machine algorithms) that produced the content.

[0225] In some embodiments, the DTR 3404 may be completely distributed (e.g., a distributed ledger provided across a plurality of nodes) or a centralized data structure. The term "distributed" in the name DTR 3404 refers to the notion that the DTR 3404 may be accessed by multiple parties for inputting claims and/or analyzing claims. In some embodiments, the DTR 3404 and/or the rating agency 3408 may implement access control functions and access control policies when accessing the DTR 3404. DTR 3404 may be accessed by the rating agency 3408, or in some embodiments, a plurality of rating agencies.

[0226] The rating agency 3408 may be an ML algorithm and/or other evaluation algorithm. The rating agency 3408 may inspect the entries 1-N 3404a-3404n of the DTR 3404 and compute a trustworthiness score for the content 3402. The trustworthiness score may correspond to whether the content 3402 is real or a deep fake.

[0227] In some embodiments, when one application of the plurality of applications 3406 inserts a claim into the DTR 3404, the one application may also be required to provide an ML model that generated the analysis (e.g., real or deep fake), an algorithm that was used in the analysis of the content 3402, code that implements aspects of the analysis of the content 3402, the analysis (e.g., whether real or deep fake) of the content 3402, and a date and duration of any related experiment (e.g., an experiment may include a set of transformations [legitimate or malicious] applied on an original image to convert it to the one that is being classified). Training data may also be provided in some embodiments as part of the claims. Algorithms involved in any experiment described in the claim and a corresponding entry of the DTR 3404 may include non-ML algorithms performed on image data such as translation operations, rotation operations, scaling operations, lighting operations, color correction operations, sharpening operations, and blurring operations to determine whether the content 3402 is genuine or fake. Some embodiments may further include ML algorithms used to analyze the content 3402, such as synthetic generation based on deep neural networks (DNNs), generative adversarial networks (GANs), etc., to determine whether the content 3402 is genuine or fake.

[0228] An example of entry 1 3404a (e.g., first claim) is shown in more detail. Claim 1 suggests that an image of the content 3402 is a fake (e.g., specifically "copy-pasted" from some source). There is a link to an original source with a valid list of non-ML transformations, indicating how the content extraction was executed.

[0229] An example of the entry 2 3404b (e.g., second claim) suggests that the image is original. Further entry 2 3404b may include a link to an ML algorithm (e.g., a reproductive algorithm) and model capable of reproducing the alleged "source" of entry 1 3404a (e.g., claim 1) synthetically.

[0230] An example of the entry 3 3404c suggests that the image of the content 3402 is a deep fake. Entry 3 3404c includes a link to an ML algorithm and model, capable of reproducing the image of the content 3402 synthetically.

[0231] An entry N 3404n, which is the last entry in the DTR 3404, suggests that the image is original. Entry N 3404n includes a link to an audio file independently recorded that semantically conveys the same information as the image.

[0232] All data in entry 1 3404a-entry N 3404n (e.g., the analysis, the algorithms, the suggested durations and the nature of claims) are evaluated by the rating agency 3408. In some embodiments, the rating agency 3408 is one of a heuristics-based algorithm or an ML algorithm, and returns the trustworthiness score for the content 3402. In some embodiments, the evaluation by the rating agency 3408 may include a human input (e.g., adjustment of an algorithm, etc.) as well.

[0233] The rating agency 3408 includes a GPU 3408a that may execute a neural network or deep learning process. In some embodiments, the rating agency 3408 evaluates all claims contained in the DTR 3404 to determine a trustworthiness score. In some embodiments, the rating agency 3408 is completely automated. In some embodiments, an action may automatically be executed based on the trustworthiness score. For example, the content 3402 may relate to a biometric authentication (e.g., voice, audio, fingerprint, facial recognition, etc.) of a user. If the trustworthiness score corresponds to a deep fake, further authentications may be executed to confirm whether the user is genuine, and/or the user may be blocked from accessing certain functions associated with a computing device.

[0234] FIG. 6H illustrates an entry 3420 in a DTR. The entry 3420 may correspond to any of the aforementioned entry 1 3404a-entry N 3404n (FIG. 6G) already discussed. The entry 3420 may include a claim summary 3420a (e.g., whether content is a deep fake or genuine). The claim summary 3420a may be encoded. The entry 3420 may include algorithmic information 3420b (e.g., an algorithm used in the analysis, for example reproducing the content or confirming the authenticity of the content, an ID of the algorithm, a type of the algorithm, and a link to the code of the algorithm). The entry 3420 may include supporting data 3420c (e.g., image, audio, ML models), training data 3420d, a date 3420e that the analysis was conducted, a proof of work 3420f (e.g., which provides a fair mechanism for accessing the ledger allowing all opinions to be inserted) and a cryptographic authentication 3420g.
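
One possible encoding of such an entry is sketched below in Python; the field names mirror the elements 3420a-3420g above, but the class name, field types and example values are hypothetical and not taken from the figures.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DTREntry:
        claim_summary: str        # 3420a: e.g., "deep fake" or "genuine", possibly encoded
        algorithm_info: dict      # 3420b: algorithm ID, type, and a link to its code
        supporting_data: list     # 3420c: images, audio, ML models backing the claim
        training_data: list       # 3420d: optional training data
        analysis_date: date       # 3420e: date the analysis was conducted
        proof_of_work: bytes      # 3420f: solution gating fair access to the ledger
        authentication: bytes     # 3420g: cryptographic authentication of the entry

    entry = DTREntry(
        claim_summary="deep fake (copy-pasted from a source)",
        algorithm_info={"id": "xform-1", "type": "non-ML", "code_link": "https://example.org"},
        supporting_data=["link to alleged source image"],
        training_data=[],
        analysis_date=date(2020, 1, 1),
        proof_of_work=b"\x00" * 8,
        authentication=b"signature bytes",
    )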

[0235] In some embodiments, the more original (e.g., unique) the supporting content 3420c, the more computationally difficult it is to dispute the supporting content 3420c. For example, it may be difficult to show that the supporting content 3420c is the result of an expensive ML computation. In such cases, a rating agency may identify that the entry 3420 is associated with a correct analysis and weight the entry 3420 with an increased weight when computing the trustworthiness score.

[0236] In some embodiments, the difficulty of a cryptographic puzzle may be a function of the ML computing capability indicated in the claims. If an entity is capable of making claims by producing synthetic images, the proof of work 3420f required for inserting content in the ledger at entry 3420 for this entity should be higher.

[0237] FIG. 6I illustrates a method 3500 to enter data into a ledger. The method 3500 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example the architecture 3400 (FIG. 6G) already discussed. More particularly, the method 3500 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

[0238] Illustrated processing block 3502 receives a first claim from a third-party. The claim may be associated with content. Illustrated processing block 3504 determines if the third-party has already submitted a claim. If so, illustrated processing block 3506 determines if the requirements for re-entry have been met. Illustrated processing block 3506 may prevent entities from dominating the ledger with claims biased toward one specific way of perceiving the content (e.g., fake or not). For example, the requirements may include a greater proof of work, whether a time difference between the submitted claim and the first claim meets a threshold, etc. If the requirements have been met, illustrated processing block 3510 enters the first claim into the DTR. Otherwise, processing block 3508 bypasses entry of the claim.
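
The gatekeeping of blocks 3504-3510 may be rendered in software along the following lines; the specific re-entry requirements (a minimum time gap and a larger proof-of-work target) and their thresholds are assumptions made for illustration only.

    MIN_RESUBMIT_GAP_S = 3600   # assumed minimum time between claims from one party
    BASE_POW_BITS = 16          # assumed baseline proof-of-work difficulty
    EXTRA_POW_BITS = 4          # assumed extra difficulty required for re-entry

    ledger = []        # accepted (party, timestamp, claim) entries
    last_seen = {}     # party -> timestamp of last accepted claim

    def try_enter_claim(party, timestamp, claim, pow_bits):
        if party in last_seen:                                      # block 3504
            gap_ok = timestamp - last_seen[party] >= MIN_RESUBMIT_GAP_S
            pow_ok = pow_bits >= BASE_POW_BITS + EXTRA_POW_BITS
            if not (gap_ok and pow_ok):                             # block 3506
                return False                                        # block 3508: bypass entry
        ledger.append((party, timestamp, claim))                    # block 3510
        last_seen[party] = timestamp
        return True

    assert try_enter_claim("app_1", 0.0, {"summary": "genuine"}, pow_bits=16)
    assert not try_enter_claim("app_1", 10.0, {"summary": "genuine"}, pow_bits=16)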

[0239] Method 3500 may be applicable for a centralized data repository (DB). In the case of a distributed ledger, some embodiments may accept repeating entries from the same entity. A rating agency, such as rating agency 3408 of FIG. 6G, may include logic that counts only the latest claim from every party in the event that a distributed ledger is used rather than a centralized data repository.

[0240] Unified Platform Memory Encryption (FIGS. 7A-7D)

[0241] Some embodiments share cryptographic logic between a CPU and platform accelerators. For example, a computing architecture 1150 illustrated in FIG. 7A may reduce computing resources, power usage and area size while enforcing trust domain boundaries. The computing architecture 1150 may include accelerators 1152 that include first accelerator 1152a-fourth accelerator 1152d. The accelerators 1152 may be specialized accelerators for different applications, such as deep learning or GPU acceleration, or may be general accelerators. Each of the accelerators 1152 may be specialized for a different purpose. The accelerators 1152 may require cryptographic protection of data stored in memory 1174 for secure usages (e.g., multi-tenant situations, virtual machines operating concurrently, protected content such as digital rights management (DRM)). Hence, some embodiments include cryptographic support at a reduced hardware cost and size with enhanced efficiency.

[0242] For example, rather than providing the desired cryptographic support within each of the accelerators 1152, some embodiments include a centralized converged cryptographic engine (CCE) 1160 and a secure path between the accelerators 1152 and the CCE 1160. For example, a CPU 1162 and any other elements outside of a trust computing base (TCB) associated with the accelerators 1152 may not have access to the path between the CCE 1160 and the accelerators 1152 to enhance security. In such a manner, unencrypted data between the accelerators 1152 and CCE 1160 may not be intercepted and read by compromised hardware such as the CPU 1162. Thus, some embodiments may enhance security while reducing the size for executing cryptographic operations of the accelerators 1152. For example, if each of the accelerators 1152 included separate encryption hardware, the size (e.g., number of gates) may substantially increase resulting in increased power consumption and cost. The addition of further security properties (e.g., integrity and replay protection) may add significantly to the overall area and/or power, and the effect may be amplified with the same logic getting replicated across various accelerators 1152. As such, some embodiments include a centralized CCE 1160.

[0243] Thus, some embodiments include a method and apparatus to unify the cryptographic support for the accelerators 1152 using the CCE 1160. The CCE 1160 is interposed on the memory path between the memory 1174 and the accelerators 1152 to encrypt and decrypt data.

[0244] For example, a trusted execution environment (TEE) 850 and/or any other secure element (e.g., Basic Input/Output System and/or Unified Extensible Firmware Interface) of the architecture 1150 may partition keys 852 (e.g., a KeyID partitioning scheme) to create a first key domain 1148a (e.g., encryption keys) and a second key domain 1148b (e.g., encryption keys). In some embodiments, the TEE 850 further assigns keys to the CPU 1162. Moreover, some embodiments may permit a centralized update to the CCE 1160 rather than requiring a plurality of distributed crypto engines to be updated.

[0245] For example, the TEE 850 may implement a key partitioning scheme to partition the keys between different trust domains. The TEE 850 may generate the first key domain 1148a for a first trust domain, and the second key domain 1148b for a second trust domain. The first trust domain may include the first and second accelerators 1152a, 1152b, while the second trust domain may include the third and fourth accelerators 1152c, 1152d. While the first and second trust domains are shown as distinct from each other, in some embodiments the first and second trust domains may overlap. For example, one or more of the accelerators 1152 may each include a first plurality of intellectual property (IP) cores (e.g., a reusable unit of logic or functionality or a cell or a layout design) that are in the first trust domain, and a second plurality of IP cores in the second trust domain.

[0246] In some embodiments, a single IP core of the accelerators 1152 may be in both the first and second trust domains to process data for both the first and second trust domains. In such embodiments, the CCE 1160 may process data from the single IP core based on an indication of whether the data is associated with the first or second trust domain. The indication may be inserted by the IP core and/or implicit in the data itself based on associated address ranges or other identifiers. While IP core is referenced above, it is to be understood that execution units and/or other cores are similarly included.

[0247] The CCE 1160 may isolate key usage between the first and second trust domains, and the CPU 1162. For example, keys of the first key domain 1148a may not be used to encrypt data of the second trust domain or the CPU 1162, and keys of the second key domain 1148b may not be used to encrypt data from the first trust domain or the CPU 1162. Thus, data of a respective trust domain may only be encrypted according to keys assigned to the respective trust domain. In this example, data of the first trust domain may only be encrypted according to keys in the first key domain 1148a, while data of the second trust domain may only be encrypted according to keys in the second key domain 1148b.

[0248] For example, the CCE 1160 and/or TEE 850 may actively block other hardware elements from accessing and/or using keys associated with different trust domains that the hardware does not belong within. In one example, the TEE 850 may block access to one or more keys in the first or second key domains 1148a, 1148b, by the CPU 1162 through allocation of the keys to the first and second trust domains, and to bypass allocating the keys to the CPU 1162. Thus, the CPU 1162 is effectively blocked from decrypting data associated with the first and second trust domains in the memory 1174 and may only see ciphertext since the CPU 1162 does not have access to the keys.

[0249] In some embodiments, the CCE 1160 and/or the TEE 850 may include an access control scheme that is implemented by embodiments to prevent the CPU 1162 from using keys that are dedicated to the first and second trust domains, such as the first key domain 1148a and the second key domain 1148b respectively. Access control may be supported by two access control mechanisms: preventing the CPU 1162 from programming a key of the first key domain 1148a (e.g., a GFx KeyID) and controlling certain commands from the CPU 1162 (e.g., a key programming instruction, PCONFIG, fails if software attempts programming a graphics key). In some embodiments, a hardware element may check that the KeyID of a request from the CPU 1162 does not fall in the first key domain 1148a range or the second key domain 1148b range to block the CPU 1162 from accessing unauthorized keys.
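
As a simplified illustration of such a check, the following snippet rejects a CPU key-programming request whose KeyID falls within a range reserved for an accelerator key domain; the ranges themselves are invented for the example and are not taken from the embodiments above.

    # Assumed KeyID ranges reserved for the first and second key domains.
    FIRST_DOMAIN_KEYIDS = range(0x20, 0x40)    # e.g., graphics KeyIDs
    SECOND_DOMAIN_KEYIDS = range(0x40, 0x60)

    def cpu_may_program_keyid(key_id: int) -> bool:
        # A PCONFIG-style key-programming request from the CPU fails if the
        # requested KeyID belongs to a reserved (accelerator) key domain.
        return key_id not in FIRST_DOMAIN_KEYIDS and key_id not in SECOND_DOMAIN_KEYIDS

    assert cpu_may_program_keyid(0x05)        # ordinary CPU KeyID: allowed
    assert not cpu_may_program_keyid(0x21)    # first (graphics) domain KeyID: rejected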

[0250] In this example, the first and second accelerators 1152a-1152b are part of the first trust domain (e.g., a first power constrained part such as a PC, etc.), and the third and fourth accelerators 1152c, 1152d belong to a second trust domain (e.g., a power constrained part such as a PC, etc.). The CPU 1162 may be part of a third trust domain (e.g., a host operating system, a third virtual machine, etc.) that is allocated a key domain (not illustrated) as well for encryption by the CCE 1160. The CCE 1160 may receive data from the first and second trust domains, encrypt the data and provide the encrypted data to the memory controller 1174a in order to isolate the first and second trust domains from each other and the CPU 1162.

[0251] For example, the first accelerator 1152a may send a first memory write operation and data request operation 1164 to the CCE 1160. The second accelerator 1152b may send a second memory write operation 1166 to the CCE 1160. The third accelerator 1152c may send a third memory write operation 1168 to the CCE 1160. The fourth accelerator 1152d may send a fourth memory write operation 1170 to the CCE 1160.

[0252] The CCE 1160 may receive the requests from the accelerators 1152. The CCE 1160 may identify whether the data originates from the first or second trust domain, and encrypt the data accordingly. For example, the CCE 1160 may identify that the first memory write operation originates from the first accelerator 1152a and identify that the first accelerator 1152a is part of the first trust domain. Since the first trust domain is permitted to use keys from the first key domain 1148a, the CCE 1160 may select one of the keys from the first key domain 1148a to encrypt data associated with the first memory write operation. Thus, the CCE 1160 may encrypt data from the first accelerator 1152a with keys from the first key domain 1148a.

[0253] Likewise, the CCE 1160 may identify that the second memory write operation originates from the second accelerator 1152b and identify that the second accelerator 1152b is part of the first trust domain. Since the first trust domain is permitted to use keys from the first key domain 1148a, the CCE 1160 may select one of the keys (e.g., the same key or a different key used to encrypt the data associated with the first memory write operation) from the first key domain 1148a to encrypt data associated with second memory write operation. Thus, the CCE 1160 may encrypt data from the second accelerator 1152b with keys from the first key domain 1148a.

[0254] Similarly, the CCE 1160 may identify that the third memory write operation originates from the third accelerator 1152c and identify that the third accelerator 1152c is part of the second trust domain. Since the second trust domain is permitted to use keys from the second key domain 1148b and not the first key domain 1148a, the CCE 1160 may select one of the keys from the second key domain 1148b to encrypt data associated with third memory write operation. Thus, the CCE 1160 may encrypt data from the third accelerator 1152c with keys from the second key domain 1148b.

[0255] Likewise, the CCE 1160 may identify that the fourth memory write operation originates from the fourth accelerator 1152d and identify that the fourth accelerator 1152d is part of the second trust domain. Since the second trust domain is permitted to use keys from the second key domain 1148b, the CCE 1160 may select one of the keys (e.g., the same key or a different key used to encrypt the data associated with the third memory write operation) from the second key domain 1148b to encrypt data associated with the fourth memory write operation. Thus, the CCE 1160 may encrypt data from the fourth accelerator 1152d with keys from the second key domain 1148b.

[0256] The CCE 1160 may then send first and second memory writes encrypted (e.g., the encrypted data of the first and second memory writes) according to one or more encryption keys of the first key domain 1148a, 1176 to the memory controller 1174a. The memory controller 1174a may store the encrypted data of the encrypted first and second memory writes in the memory 1174. The CCE 1160 may then send the third and fourth memory writes encrypted (e.g., the encrypted data of the third and fourth memory writes) according to one or more encryption keys of the second key domain 1148b, 1178 to the memory controller 1174a. The memory controller 1174a may store the encrypted third and fourth memory writes in the memory 1174.

[0257] As already described, the first accelerator 1152a may also issue a data request to the CCE 1160 during operation 1164. The CCE 1160 may serve as an intermediary between the accelerators and the memory 1174, and as such request the data 1182 from the memory controller 1174a on behalf of the first accelerator 1152a and in response to the data request from the first accelerator 1152a. The memory controller 1174a may retrieve the data from the memory 1174 and send the encrypted data in response to the data request 1184. The CCE 1160 may identify that the data is associated with the first trust domain, since the first accelerator 1152a originated the data request and is part of the first trust domain, and decrypt the data based on an encryption key from the first key domain 1148a. For example, the CCE 1160 may include a data structure identifying a key used to encrypt data associated with the write requests. The data structure may be referenced when data is retrieved from the memory 1174 to identify an encryption key that was used to encrypt the data, and decrypt the data based on the encryption key. Thus, the CCE 1160 may decrypt the data retrieved from the memory 1174 and send the decrypted data 1186 to the first accelerator 1152a.

[0258] In some embodiments, the architecture 1150 may include a bypass path to prevent penalizing other memory traffic when a CCE 1160 (e.g., TME/MKTME) is not enabled. As noted, the CCE 1160 may further decrypt encrypted data from the memory 1174 for the accelerators 1152 based on the lookup table, and an encryption key used to encrypt the data. While the above has been described with respect to accelerators 1152, it is to be noted that the CCE 1160 and/or TEE 850 may operate similarly with other hardware elements, such as CPU 1162, to process encryption of data between different trust domains. Further, to process read requests, aspects of the above process may be reversed as already described to retrieve encrypted data associated with a trust domain of the first and second trust domains, and decrypt the data accordingly. In some embodiments, the CCE 1160 may be a hardware element that is part of a same system-on-chip (SoC) as the accelerators 1152.

[0259] It is worthwhile to note additionally, the location of the CCE 1160 may be flexible. For example, in some embodiments the CCE 1160 is separate from the accelerators 1152. In some embodiments, the CCE 1160 may be a part of one of the accelerators 1152 and the other accelerators of the accelerators 1152 may communicate with the CCE 1160 through secured channels.

[0260] FIG. 7B illustrates a method 1190 to encrypt data and decrypt data according to various trust domains. The method 1190 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example the architecture 1150 (FIG. 7A) already discussed. More particularly, the method 1190 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

[0261] Illustrated processing block 1192 partitions keys between trust domains. Illustrated processing block 1194 isolates key accesses between trust domains. Illustrated processing block 1196 receives a data write from a first trust domain of the trust domains. Illustrated processing block 1198 encrypts data associated with the data write with a key that is assigned to the first key domain. For example, the first key domain may be assigned to the first trust domain. Thus, method 1190 may select a key from the first key domain for the data that originates from the first trust domain. Illustrated processing block 800 writes encrypted data to memory. Illustrated processing block 802 receives a data read request from a second trust domain of the trust domains. Illustrated processing block 804 retrieves encrypted data identified by the read request and decrypts the encrypted data according to a key assigned to the second trust domain (different from the key assigned to the first trust domain) and that was used to encrypt the encrypted data. Illustrated processing block 806 sends the decrypted data to the second trust domain.
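
The key-selection portion of the method 1190 may be summarized by the following sketch: keys are partitioned per trust domain, and a key is drawn only from the domain of the requesting accelerator. The domain and accelerator names are illustrative placeholders, and the encryption step itself is omitted for brevity.

    # Block 1192: partition keys into per-trust-domain key domains.
    key_domains = {
        "first_trust_domain": ["key_1a", "key_1b"],
        "second_trust_domain": ["key_2a", "key_2b"],
    }
    # Which trust domain each accelerator belongs to (the CPU is in neither).
    accelerator_domain = {
        "accelerator_1": "first_trust_domain", "accelerator_2": "first_trust_domain",
        "accelerator_3": "second_trust_domain", "accelerator_4": "second_trust_domain",
    }

    def select_key(requester: str) -> str:
        # Blocks 1194/1198: only keys of the requester's own domain may be used.
        return key_domains[accelerator_domain[requester]][0]

    assert select_key("accelerator_1") == "key_1a"
    assert select_key("accelerator_3") == "key_2a"
    assert "cpu" not in accelerator_domain   # the CPU holds no accelerator-domain key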

[0262] FIG. 7C illustrates a method 810 of a granular encryption scheme that encrypts data from different cores of an accelerator and/or CPU with different keys based on trust domains. The method 810 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example the architecture 1150 (FIG. 7A), method 1190 (FIG. 7B) already discussed. More particularly, the method 810 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

[0263] Illustrated processing block 812 receives a first data access from a first IP of an accelerator. The first data access may be a write operation for first data. Illustrated processing block 814 identifies that the first IP is in the first trust domain. Illustrated processing block 816 encrypts the first data with a key for the first trust domain and stores the encrypted first data. Illustrated processing block 818 receives a second data access from a second IP of the accelerator. The second data access may be a write operation for second data. The first and second IP are on the same accelerator (e.g., a GPU, etc.). Illustrated processing block 820 identifies that the second IP is in a second trust domain. The second trust domain is different from the first trust domain. Illustrated processing block 822 encrypts the second data with a second key for the second trust domain and stores the encrypted second data. The above method 810 may be implemented in one or more of the CCE 1160 or TEE 850 (FIG. 7A) to operate in conjunction with a plurality of accelerators each including IP assigned to different trust domains.

[0264] FIG. 7D illustrates a method 840 of encrypting data from a same accelerator and/or CPU with different keys based on trust domains. The method 840 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example the architecture 1150 (FIG. 7A), method 1190 (FIG. 7B), method 810 (FIG. 7C) already discussed. More particularly, the method 840 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

[0265] Illustrated processing block 842 identifies data accesses from an accelerator. Illustrated processing block 844 identifies that the accelerator is in a plurality of trust domains. Illustrated processing block 846 identifies a tag associated with the data identifying a first trust domain of the plurality of trust domains. Illustrated processing block 848 encrypts data associated with the data accesses with a key for the first trust domain. The above method 840 may be implemented in one or more of the CCE 1160 or TEE 850 (FIG. 7A) to operate in conjunction with a plurality of accelerators each operating within different trust domains.

[0266] Cryptographic Per Object Shared Virtual Memory Model for CPU/GPU Security (FIGS. 7E-7I)

[0267] Some embodiments may relate to a GPU and a CPU sharing data objects, such as HEAP data objects (e.g., a malloc associated object), and cryptographically defining bounds and security enforcement through cryptographic processes. Some embodiments further prevent use of data objects after they are freed by binding an encoded cryptographic address (e.g., pointer related data) with data encryption at a core and/or execution unit execution pipeline.

[0268] Some other implementations may rely on coarse grain security to encrypt just memory (e.g., address space separation such as in processes and/or virtual machines, TEE) and may not granularly vary encryption on a per-object basis. Other implementations that operate at a finer granularity may be inefficient and introduce excessive metadata. For example, a metadata "wall" may include additional overhead for every granular memory access.

[0269] Turning now to FIG. 7E, a granular encryption process 3100 may mitigate software and/or hardware based vulnerabilities with an encryption scheme that may vary per object while avoiding excessive metadata. For example, some embodiments may employ a two part encryption process to firstly encrypt a virtual address, and then further encrypt data associated with the virtual address based on the encrypted virtual address (e.g., a shared virtual address scheme which permits sharing of objects between CPU and GPU). Doing so may enhance security at a relatively low cost by requiring an actor to have access to the encrypted virtual pointer and the encrypted data in order to successfully access decoded data.

[0270] In some embodiments, the CPU may encrypt data and virtual addresses in a process specific manner. For example, a first process may have data and virtual address encrypted according to a first key, a second process may be encrypted according to a second key, etc. The tweaks however may vary as indicated below.

[0271] As shown, a first encrypted virtual address 3138 is provided. For example, a CPU may encrypt a virtual address (e.g., a pointer) according to a key and a tweak based on the virtual address (e.g., fields such as address bits, object characteristics such as size, type, location, ownership, access control, permissions, stack location, data binding, etc.) to generate the first encrypted virtual address 3138. The CPU may share with an authorized actor 3120 (e.g., a GPU) the key and the tweak used to encrypt the virtual address to generate the first encrypted virtual address 3138 or generate an encrypted portion of virtual address 3138.

[0272] The first encrypted virtual address 3138 may be a ciphertext of the virtual address. As illustrated, the authorized actor 3120 (e.g., the GPU) may access the first encrypted virtual address 3138. The authorized actor 3120 may decrypt the address 3116 with a first key so that the first encrypted virtual address 3138 is decrypted into the first address 3102 (e.g., a virtual address). The first address 3102 may point to the first encrypted data 3124. Notably, the first encrypted data 3124 may still be encrypted. The authorized actor 3120 may correctly decrypt the first encrypted data 3124 based at least on the first encrypted virtual address 3138, 3110 to generate decrypted data 3108 (e.g., a data object). For example, the first encrypted virtual address 3138 may be used as a tweak and/or a decryption key in the decryption of the first encrypted data 3124.

[0273] In some embodiments, the first encrypted data 3124 may also be decrypted based on the first key and/or the tweak described above to encrypt the first encrypted virtual address 3138. In some embodiments, the key used to encrypt the first encrypted data 3124 may be different from the key used to encrypt the first encrypted virtual address 3138. For example, a decryption engine may implement a decryption process based on the key and one or more values of the first encrypted virtual address 3138.

[0274] A first unauthorized actor 3118 (e.g., a software program) may also decrypt address 3114 into the second address 3104 (e.g., a virtual address). That is, the first unauthorized actor 3118 may incorrectly decrypt the first encrypted virtual address 3138 to the second address 3104 since the first unauthorized actor 3118 is unaware of the key and/or tweak used to encrypt the first encrypted virtual address 3138. That is, the first unauthorized actor 3118 may not have access to the first key to properly decrypt the first encrypted virtual address 3138 to the proper address, which in this example is the first address 3102. Thus, the first unauthorized actor 3118 may decrypt the first virtual address 3138 improperly to the second address 3104, which points to the second encrypted data 3122.

[0275] The second address 3104 may point to the second encrypted data 3122. The first unauthorized actor 3118 may incorrectly decrypt the second encrypted data 3122, 3112. For example, the first unauthorized actor 3118 may not have access to the key that was used to encrypt the second encrypted data 3122 and/or a second encrypted virtual address that corresponds to (e.g., points to) the second address 3104. For example, the CPU may have encrypted the second address 3104 to the second encrypted virtual address. One or more values of the second encrypted virtual address may have been used to encrypt the second encrypted data 3122, and may be necessary for proper decryption of the second encrypted data 3122. The first unauthorized actor 3118 however may have identified the second address 3104 based on the first encrypted virtual address 3138 and not the second encrypted virtual address, and thus be unable to decrypt the second encrypted data 3122. As such, the first unauthorized actor 3118 may incorrectly decrypt the second encrypted data 3112 to generate inaccurate data 3106, thereby being blocked from identifying useful data through the two-part decryption described above.

[0276] A second unauthorized actor 3126 (e.g., a software program) may conduct an attack (e.g., a buffer overflow attack) based on the third address 3128, 3130. For example, the second unauthorized actor 3126 may access a third address 3128 and increment the third address 3128 to reach the first address 3102. As discussed, the first address 3102 corresponds to the first encrypted virtual address 3138. The first address 3102 further points to the first encrypted data 3124. Notably, since the second unauthorized actor 3126 is unaware of the first encrypted virtual address 3138, and particularly the relationship between the first encrypted virtual address 3138 and the first address 3102, the second unauthorized actor 3126 may be unable to properly decrypt the first encrypted data 3124. That is, the first encrypted data 3124 is encrypted according to the one or more values of the first encrypted virtual address 3138. Since the second unauthorized actor 3126 is unaware of the first encrypted virtual address 3138, the second unauthorized actor 3126 conducts a decryption process without the first encrypted virtual address 3138, 3132 and/or the key used to encrypt the first encrypted data 3124. Thus, the second unauthorized actor 3126 may generate inaccurate data 3134.

[0277] Thus, process 3100 may permit accesses by the authorized actor 3120, while the first unauthorized actor 3118 and the second unauthorized actor 3126 may be blocked. Notably, process 3100 may execute for each object in a HEAP. Further, while it is described that the CPU encrypts data, some embodiments may include the GPU encrypting data and sharing the key with the CPU. Some embodiments may also relate to cryptographic computing (e.g., cryptographic capabilities).

[0278] Some embodiments share cryptographic pointers/data across resources. The illustrated approach uses shared virtual memory (SVM) to provide a common addressing model. Pointers (linear addresses) are then shared between the CPU, GPU, VPU, etc. Additionally, pointers (e.g., virtual addresses) are cryptographically encoded and a tweak key is used to decrypt encrypted data. Pointers encode power-of-two bounds and a version used to encrypt every object uniquely from every other object, both spatially and temporally. Keys may be associated with page tables or contexts for shared virtual memory, such that switching page tables or contexts will also switch the corresponding keys used to encrypt the pointers and data for different contexts or page table mappings.
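By way of a non-limiting illustration only, the following Python sketch shows one way such a cryptographically encoded pointer could carry a power-of-two bound and a version alongside encrypted upper address bits. The field widths, the helper names, and the SHA-256-derived keystream are assumptions introduced for illustration and are not the claimed k-cipher or AES-based encoding.

```python
# Toy sketch of a cryptographically encoded pointer: NOT the claimed k-cipher/AES design,
# only a stdlib illustration of a power-of-two bound (size exponent) and a version field
# packed next to encrypted upper address bits.
import hashlib
import secrets

ADDR_BITS = 48      # assumed linear-address width
VERSION_BITS = 4    # temporal version distinguishing reuse of the same slot

def _keystream(key: bytes, tweak: bytes, nbits: int) -> int:
    # Stand-in for a small block cipher: derive an nbits-wide pad from key and tweak.
    digest = hashlib.sha256(key + tweak).digest()
    return int.from_bytes(digest, "big") >> (256 - nbits)

def encode_pointer(addr: int, size_exp: int, version: int, key: bytes) -> int:
    # Low size_exp bits stay in the clear for pointer arithmetic; upper bits are encrypted.
    low = addr & ((1 << size_exp) - 1)
    high = addr >> size_exp
    tweak = bytes([size_exp, version]) + low.to_bytes(8, "big")
    enc_high = high ^ _keystream(key, tweak, ADDR_BITS - size_exp)
    return (size_exp << (ADDR_BITS + VERSION_BITS)) | (version << ADDR_BITS) | (enc_high << size_exp) | low

def decode_pointer(ca: int, key: bytes) -> int:
    size_exp = ca >> (ADDR_BITS + VERSION_BITS)
    version = (ca >> ADDR_BITS) & ((1 << VERSION_BITS) - 1)
    low = ca & ((1 << size_exp) - 1)
    enc_high = (ca & ((1 << ADDR_BITS) - 1)) >> size_exp
    tweak = bytes([size_exp, version]) + low.to_bytes(8, "big")
    high = enc_high ^ _keystream(key, tweak, ADDR_BITS - size_exp)
    return (high << size_exp) | low

per_context_key = secrets.token_bytes(16)   # key tied to a page table/context (illustrative)
ca = encode_pointer(0x7F0012345678, 7, 3, per_context_key)
assert decode_pointer(ca, per_context_key) == 0x7F0012345678
# Decoding with a different context's key yields a different (useless) address.
assert decode_pointer(ca, secrets.token_bytes(16)) != 0x7F0012345678
```

Because the per-context key participates in the encoding, switching contexts (and therefore keys) renders previously encoded pointers undecodable, consistent with the key-per-context behavior described above.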

[0279] FIG. 7F illustrates a method 3000 of decrypting data with a GPU. The method 3000 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example the process 3100 (FIG. 7E) already discussed. More particularly, the method 3000 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

[0280] Illustrated processing block 3002 decrypts an encrypted memory address (e.g., a virtual address). The decryption may execute with a tweak of the memory address and an encryption key. The encrypted memory address may be a pointer that cryptographically encodes an object size and/or location. A CPU, which encrypted the memory address, may provide the tweak and the key to the GPU. Illustrated processing block 3004 loads ciphertext from the decrypted memory address. Illustrated processing block 3006 deciphers the ciphertext based on the encrypted memory address. In some embodiments, illustrated processing block 3006 further deciphers the ciphertext based on an encryption key with the encrypted memory address serving as a tweak (e.g., encrypt the address-based tweak with the key to generate a keystream that is XORed with the ciphertext data for that address to reveal the plaintext data). Illustrated processing block 3008 executes an operation based on the decrypted data.
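The following toy Python walk-through maps blocks 3002-3008 to code, following the keystream description above (the address-based tweak is encrypted with the key and XORed with the ciphertext). A SHA-256 counter-mode keystream stands in for the real block cipher, and the addresses, labels, and helper names are illustrative assumptions only.

```python
# Toy mapping of blocks 3002-3008: decrypt the encrypted virtual address, load the
# ciphertext it points to, then decrypt that data using the *encrypted* address as a tweak
# (keystream XOR). hashlib stands in for the real cipher; names/widths are illustrative.
import hashlib

def keystream(key: bytes, tweak: bytes, length: int) -> bytes:
    # Counter-mode expansion of key+tweak into a pad of the requested length.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + tweak + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x01" * 16                                   # key shared by the CPU with the GPU
addr_tweak = b"object-size-and-location"             # attributes mixed into the address tweak
plain_addr = (0x1000).to_bytes(8, "big")
encrypted_addr = xor(plain_addr, keystream(key, addr_tweak, 8))

# Block 3002: decrypt the encrypted memory address with the key and tweak.
decrypted_addr = xor(encrypted_addr, keystream(key, addr_tweak, 8))
assert decrypted_addr == plain_addr

# Block 3004: load ciphertext from the decrypted address (memory modeled as a dict).
plaintext = b"confidential render data"
memory = {0x1000: xor(plaintext, keystream(key, encrypted_addr, len(plaintext)))}
ciphertext = memory[int.from_bytes(decrypted_addr, "big")]

# Block 3006: decipher the ciphertext with the encrypted address serving as the tweak.
recovered = xor(ciphertext, keystream(key, encrypted_addr, len(ciphertext)))

# Block 3008: execute an operation on the decrypted data.
assert recovered == plaintext
```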

[0281] Method 3000 may execute without additional registers and/or cache, without additional memory overhead (e.g., tables), and without additional loads/stores. Method 3000 may further flexibly mitigate evolving threats with little to no performance impact and minimal recoding or recompilation. It is worthwhile to note that as more attributes of the virtual address are used as a tweak to encrypt the virtual address (e.g., address bits, object data such as size, type and location, ownership, access control, permissions, etc.), the strength of the encryption may increase.

[0282] FIG. 7G illustrates an encryption and decryption process 3200 that may be implemented with a GPU. Process 3200 may implement aspects of and/or be incorporated into any of the embodiments described herein, including process 3100 (FIG. 7E) and method 3000 (FIG. 7F). A cryptographic address (CA) 3202 is illustrated. The size information 3202a may identify the number of tweak bits that are to be used for decryption. The size information 3202a may indicate the number of bits, from the first address bit 3202c onward (e.g., unshown address bits), that are to be used for the tweak 3214. In some embodiments, the tweak 3214 does not include the bits used for pointer arithmetic, which is the N addressing bit 3202n in this example. In some embodiments, pointer arithmetic may traverse the bits from the first address bit 3202c through the N addressing bit 3202n. The process 3200 may provide a key 3206 (e.g., from a CPU), that was used to encrypt the cryptographic address 3202, to the decryption engine 3210 (e.g., a k-cipher). The process 3200 may also provide the CA 3202, 3208 to the decryption engine 3210. The decryption engine 3210 may output the decryption 3212 as the decrypted linear address 3216.

[0283] The decrypted linear address 3216 corresponds to (e.g., points to) the HEAP object 3218, which is stored in the 128B slot. The process 3200 retrieves the ciphertext 3220 of the HEAP object 3218 and provides the ciphertext to the cryptographic engine 3222. The cryptographic engine 3222 (e.g., Gimli) may receive a tweak 3204, for example a tweak different from the one provided to the decryption engine 3210, to execute decryption of the ciphertext. The tweak may be the entire CA 3202. The cryptographic engine 3222 may also receive a key 3224, for example the same key or a different key than the one provided to the decryption engine 3210. The cryptographic engine 3222 may decrypt the ciphertext to generate plaintext data 3226. The GPU may then execute operations with the plaintext data.

[0284] In some embodiments, process 3200 may be executed by a GPU. Notably, the GPU may set up (e.g., generate) the decryption engine 3210 and the cryptographic engine 3222 in parallel. For example, since the cryptographic engine 3222 does not need the decrypted linear address 3216 for decryption, the cryptographic engine 3222 may be initiated as soon as the CA 3202 is identified, thereby reducing loading and initiation time (e.g., time to configure configurable hardware logic).

[0285] FIG. 7H illustrates a cryptographic cache with cryptographic diffusion and confusion 2580, with a comparison to adversary types. Embodiments of FIG. 7H may implement aspects of process 3100 (FIG. 7E), method 3000 (FIG. 7F), and process 3200 (FIG. 7G).

[0286] FIG. 7I illustrates a diagram 2584 of sharing cryptographic pointers and/or data across resources. Diagram 2584 may implement aspects of process 3100 (FIG. 7E), method 3000 (FIG. 7F), process 3200 (FIG. 7G) and the cryptographic diffusion and confusion 2580 (FIG. 7H). For example, some embodiments use a shared virtual memory (SVM) to provide a common addressing model. Some embodiments also include pointers (e.g., linear addresses) that are then shared between various units such as the CPU, GPU, VPU, etc. Some embodiments further include pointers that are cryptographically encoded and use a tweak key to decrypt encrypted data. Furthermore, some embodiments include pointers that encode power-of-two bounds and a version used to encrypt every object uniquely from every other object, both spatially and temporally.

[0287] Roots-of-Trust (RoT) in GPU Compute Engines (FIGS. 8A-8D)

[0288] A root-of-trust (RoT) in a graphics processing unit (GPU) may include reliable hardware, firmware, and/or software components that execute security functions. The RoT may be inherently trusted, and thus must be secure by design. Therefore, some RoTs are implemented in hardware so that malware cannot tamper with the functions they provide. Thus, RoTs may reliably affirm security boundaries between different tenants. For example, each tenant may verify the security status of a GPU to confirm that the GPU has not been compromised by another tenant and/or malware prior to executing a workload.

[0289] In detail, FIG. 8A may illustrate a tenant-based processing environment 700 in which a GPU 736 may execute operations on behalf of a tenant 714. While one tenant 714 is illustrated, it will be understood that the GPU 736 may support multiple tenants concurrently, with each tenant verifying the security of the GPU 736 as outlined below to confirm that the GPU 736 is not compromised (e.g., physically modified and/or compromised by another tenant). The GPU 736 may include a plurality of GPU compute engines 702 that include a first compute engine-N compute engine 702a-702n. The first compute engine-N compute engine 702a-702n may become attack engines if compromised. Thus, in some embodiments, the first compute engine-N compute engine 702a-702n, GPU RoT 704, first target environment 706, second target environment 708 and third target environment 712 may be designed with "RoT" hardware for generating attestable identity and boot integrity information (e.g., claims).

[0290] For example, each of the first compute engine-N compute engine 702a-702n, GPU RoT 704, first target environment 706, second target environment 708 and third target environment 712 (e.g., various engines) may have a RoT in hardware or have an isolated execution environment where the environment resources (e.g., compute, memory, storage, I/O, etc.) are partitioned by a RoT such as Trust Domain Extensions, Software Guard Extensions, a hypervisor, or a resource manager (e.g., Resource Director Technology (RDT)). Each isolated execution environment, such as the first compute engine-N compute engine 702a-702n, GPU RoT 704, first target environment 706, second target environment 708 and third target environment 712 (e.g., various engines), may have firmware that is loaded and may have keys, or seeds for generating keys, provisioned, where the loading/provisioning of these values may be derived from a primitive hardware RoT. The intermediate layering of the first compute engine-N compute engine 702a-702n, GPU RoT 704, first target environment 706, second target environment 708 and third target environment 712 may follow the conventions outlined by the DICE layering specification (e.g., a Trusted Computing Group (TCG) specification).

[0291] The first compute engine-N compute engine 702a-702n may both attest and verify the integrity state of the peer engines 748 and of other engines of the first compute engine-N compute engine 702a-702n before performing pipelined operations. For example, some embodiments may be augmented with the peer engines 748, which are peer compute engines. The GPU RoT 704, first target environment 706, second target environment 708 and third target environment 712 may further attest and verify the integrity of each other, as will be explained below, prior to executing pipelined operations as well.

[0292] Some embodiments further include the peer engines 748. The peer engines 748 may be separate from the GPU 736, but in communication with the GPU 736 (e.g., on a same SoC or computing device). One such example of the peer engines 748 is a smart network interface card (NIC), which is a NIC that offloads processing tasks (e.g., encryption/decryption, firewall, etc.) that a central processing unit of the system may normally handle. Other examples of the peer engines 748 include central processing units, remote nodes, microprocessors, trust domain extensions, and/or Software Guard Extensions. Thus, the peer engines 748 may further participate towards both verifying and attesting RoT context for a better confidential and secure computing capability.

[0293] The GPU 736 may be partitioned into finer granularity "lanes" (e.g., including memory, core/core slice, cache and storage resources). Per-slice attestation and identity keys may be derived and/or rolled back according to an elastic compute paradigm. For example, Device Identifier Composition Engine (DICE) layering may include fan-out for seeding key derivation functions. Such attestation and identity keys may aid in identifying devices that may be trusted (e.g., uncompromised by physical attacks or otherwise).

[0294] As illustrated in FIG. 8A, a GPU RoT 704 (e.g., a hardware device) may include a RoT attestation environment 704a. The GPU RoT 704 may be resistant to physical attacks. Thus, security may be premised on the GPU RoT 704 having the ability to attest and verify a first target environment 706. The RoT attestation environment 704a may collect claims 716 from the first target environment 706. For example, the RoT attestation environment 704a may measure software and/or physical characteristics of the first target environment 706.

[0295] The claims of the first target environment 706 may be attestable identity information, such as hardware and/or software measurements of the first target environment 706. For example, the GPU RoT 704 may identify, measure and/or receive the hardware and software status of the first target environment 706 and report the measurements as first evidence (e.g., a hash function of the measurements that is signed with a private key and/or a certificate). Doing so may enable remote attestation of supported system events (e.g., a software chain of trust), but may also support the management of platform-specific configuration and status events such as, for example, platform capabilities, execution modes, and platform security policies.

[0296] The GPU RoT 704 may further send a key seed and first evidence 718 (e.g., collected claims and/or a signed certificate) of the first target environment 706 to the first target environment 706. The first evidence may include a certificate that is signed by the GPU RoT 704 with a RoT identity key (e.g., a private encryption key) to attest to the security of the first target environment 706. That is, the RoT attestation environment 704a may attest to the hardware and/or software of the first target environment 706. The key seed may be based on various values associated with the first target environment 706 (e.g., hash values of software and/or hardware measurements) and an input entropy (e.g., a unique device secret). For example, the input entropy may be modified based on the hash values. In some embodiments, the key seed may be randomized based on various inputs.

[0297] In some embodiments, the RoT attestation environment 704a further determines the RoT identity key for the RoT attestation environment 704a based on the input entropy, but not the various values of the first target environment 706, to ensure that the RoT identity key of the RoT attestation environment 704a is not duplicated on the first target environment 706. The input entropy may be implemented by a physically unclonable fuse that is physically tamper resistant such that if a third party probes or attempts to read the value in the physically unclonable fuse, the value of the physically unclonable fuse is erased. The GPU RoT 704 would then cease to operate correctly (e.g., fail to properly authenticate and generate keys for signing) to alert tenants that the GPU RoT 704 is compromised.

[0298] The first target environment 706 (e.g., a firmware and/or bring-up software) may further include a first attestation environment 706a (e.g., a RoT hardware). The first attestation environment 706a may generate a key (e.g., a first identity key that is an encryption key) based on the key seed received from the GPU RoT 704, and collect claims 720 of the second target environment 708 (e.g., a GPU resource manager). For example, the first attestation environment 706a may read a memory of the second target environment 708 to collect the claims 720.

[0299] The claims may be hardware and/or software measurements of the second target environment 708. The first attestation environment 706a may identify (e.g., read from a memory) the claims of the second target environment 708, verify the claims and generate a key seed. The first attestation environment 706a may generate a certificate attesting to the claims of the second target environment 708 that is signed with the first identity key. The first attestation environment 706a may generate second evidence (e.g., a hash function of the measurements that is signed with the first identity key and/or a certificate). The second evidence may include the certificate generated by the first attestation environment 706a and/or a hash of the claims of the second target environment 708. The first attestation environment 706a may send the key seed and the first and second evidence 722 to the second target environment 708. Thus, the second evidence may include a hash of the measurements associated with the second target environment 708 and/or a certificate that is signed by the first attestation environment 706a with the first identity key to attest to the security of the second target environment 708.

[0300] The first attestation environment 706a may generate the key seed based on an entropy source (e.g., a composite device identifier (CDI) function that corresponds to a set of data used to identify the software running on a system that was used to generate this data) and various values associated with the second target environment 708 (e.g., hash values of software and/or hardware measurements) to randomize the key seed. For example, a cryptographic digest of the associated software/firmware may be used as a class identifier (e.g., CDI) of the targeted environment. In some embodiments, the composite device identifier function may generate a value based on the key seed from the RoT attestation environment 704a, and the value may be used to generate the key seed for the second target environment 708 along with the various values associated with the second target environment 708. The key seed may be the output of a one-way function (e.g., a hash) that combines a digest of the firmware, firmware initialization values, an entropy source (e.g., the CDI), and key disambiguation values. The seed may be used to generate asymmetric or symmetric keys. Thus, the key seed from the RoT attestation environment 704a may be used to generate the first identity key for the first attestation environment 706a and the key seed for a second attestation environment 708a. The key seed generated by the first attestation environment 706a may be unique and different from the key seed generated by the RoT attestation environment 704a to ensure that the RoT and first identity keys are unclonable.
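As a hedged sketch of the seed derivation just described, the following Python fragment derives a key seed as a one-way function of the parent-supplied entropy, a digest of the target environment's firmware, and a disambiguation value, and produces evidence over the claims. HMAC-SHA256 stands in for the hardware one-way function, and the names and strings are illustrative; the actual design would sign a certificate with the identity key rather than compute a MAC.

```python
# Hedged sketch of the layered seed/evidence derivation: HMAC-SHA256 stands in for the
# hardware one-way function and for certificate signing; names and strings are illustrative.
import hashlib
import hmac

def derive_key_seed(parent_entropy: bytes, target_firmware: bytes, disambiguation: bytes) -> bytes:
    firmware_digest = hashlib.sha256(target_firmware).digest()  # measured claim of the target
    return hmac.new(parent_entropy, firmware_digest + disambiguation, hashlib.sha256).digest()

def generate_evidence(identity_key: bytes, claims: bytes) -> bytes:
    # Evidence here is a MAC over hashed claims; the design above signs a certificate instead.
    return hmac.new(identity_key, hashlib.sha256(claims).digest(), hashlib.sha256).digest()

rot_key_seed = b"seed-from-rot-attestation-environment"         # illustrative parent seed
resource_manager_fw = b"gpu-resource-manager-firmware-image"    # second target environment 708

first_identity_key = hmac.new(rot_key_seed, b"identity", hashlib.sha256).digest()
seed_for_708 = derive_key_seed(rot_key_seed, resource_manager_fw, b"disambiguation-708")
second_evidence = generate_evidence(first_identity_key, resource_manager_fw)

# A tampered resource manager image produces a different seed, so its keys cannot match.
tampered_seed = derive_key_seed(rot_key_seed, b"tampered-image", b"disambiguation-708")
assert seed_for_708 != tampered_seed
```

Because the firmware digest feeds the derivation, a tampered target environment receives a different seed and therefore cannot reproduce the keys of the untampered environment.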

[0301] The second target environment 708 may include the second attestation environment 708a (e.g., a RoT hardware). The second attestation environment 708a may collect claims (e.g., hardware and/or software measurements) of a third target environment 712 (e.g., a GPU compute engine manager). The second attestation environment 708a may generate a key (e.g., a second identity key that is an encryption key) based on the key seed received from the first attestation environment 706a. In some embodiments, to collect the claims 724, the second attestation environment 708a may read a memory of the third target environment 712.

[0302] The claims may be hardware and/or software measurements of the third target environment 712. The second attestation environment 708a may receive (e.g., read from a memory) the claims of the third target environment 712, verify the claims and generate a key seed. The second attestation environment 708a may generate a certificate attesting to the claims of the third target environment 712 that is signed with the second identity key. The second attestation environment 708a may generate third evidence (e.g., a hash function of the measurements that is signed with the second identity key and/or a certificate). The third evidence may include the certificate generated by the second attestation environment 708a and/or a hash of the claims of the third target environment 712. The second attestation environment 708a may send the key seed and the first, second and third evidence 726 to the third target environment 712. Thus, the third evidence may include a hash of the measurements associated with the third target environment 712 and/or a certificate that is signed by the second attestation environment 708a with the second identity key to attest to the security of the third target environment 712.

[0303] The second attestation environment 708a may generate the key seed based on an entropy source (e.g., a composite device identifier function that corresponds to a set of data used to identify the software running on a system that was used to generate this data) and various values associated with the third target environment 712 (e.g., hash values of software and/or hardware measurements) to randomize the key seed. In some embodiments, the composite device identifier function may generate a value (also known as the CDI) based on the key seed from the first attestation environment 706a, and the value may be used to generate the key seed for the third target environment 712 in conjunction with the various values of the third target environment 712. Thus, the key seed from the first attestation environment 706a may be used to generate the second identity key for the second attestation environment 708a, and the key seed for a third attestation environment 712b. The key seed generated by the second attestation environment 708a may be unique and different from the key seed generated by the RoT attestation environment 704a and the first attestation environment 706a to ensure that the RoT, first and second identity keys are unclonable.

[0304] Thus, some embodiments may implement a cascaded key seed generation system. For example, the first target environment 706 creates the CDI for the second target environment 708, and so forth, in a layering model. Alternatively, there may be a composite model where the first target environment 706 creates CDI values for a number of different environments (e.g., the second through n-th environments). The first target environment 706 may add disambiguation values to the CDI for each different environment, such as hashing the position (2|3| . . . |n) with a CDI that is otherwise common across all composited environments. Additionally, the first target environment 706 may manage multiple UDS (unique device secret) values for each composited environment for added security.

[0305] The third target environment 712 may further include the third attestation environment 712b (e.g., a RoT hardware). The third attestation environment 712b may generate a key (e.g., a third identity key that is an encryption key) based on the key seed received from the second attestation environment 708a. The third attestation environment 712b may accumulate the first, second and third evidence, and evidence generated for the first-N compute engines 702a-702n. The third attestation environment 712b may provide the accumulated certificates to requesting parties, such as the tenant 714, to verify the security of the GPU 736.

[0306] The third target environment 712 may include an attestation and key manager 712a that sends different key seeds to the first-N compute engines 702a-702n. For example, the attestation and key manager 712a may send a first key seed 728 to the first compute engine 702a. The first compute engine 702a may generate a unique key (e.g., identity key) for communication with the tenant 714 based on the first key seed. The attestation and key manager 712a may collect first compute engine 702a claims 734 (e.g., software and/or hardware measurements) as already described to generate evidence for the first compute engine 702a. In some embodiments, the attestation and key manager 712a may read memory of the first compute engine 702a to collect the claims.

[0307] For example, the evidence for the first compute engine 702a may be a hash function of the measurements that is signed with the third identity key and/or a certificate generated by the attestation and key manager 712a. The evidence may include the certificate generated by the attestation and key manager 712a and/or a hash of the claims of the first compute engine 702a. Thus, the evidence may include a hash of the measurements associated with the first compute engine 702a and/or a certificate that is signed by the attestation and key manager 712a with the third identity key to attest to the security of the first compute engine 702a.

[0308] The attestation and key manager 712a may send a second key seed 730 to the second compute engine 702b. The second compute engine 702b may generate a unique key (e.g., an identity key) for communication with the tenant 714 based on the second key seed. The attestation and key manager 712a may collect second compute engine 702b claims 736 (e.g., software and/or hardware measurements) to generate evidence in a manner similar to that described herein.

[0309] Further, the attestation and key manager 712a may continue similarly to the above with each compute engine of the compute engines 702a-702n until the N compute engine 702n is reached. The attestation and key manager 712a may send an N key seed 732 to the N compute engine 702n. The N compute engine 702n may generate a unique key for communication with the tenant 714 based on the N key seed. The attestation and key manager 712a may collect N compute engine 702n claims 738 (e.g., software and/or hardware measurements) to generate evidence in a manner similar to that described herein.

[0310] Each of the first-N key seeds is unique (having been augmented with a disambiguation value for each tenant environment instance), establishing unique identity keys for the environment. Additional keys may be derived from the specific environment of the GPU RoT 704, first target environment 706, second target environment 708 and third target environment 712 (e.g., to support communication). For example, each of the first-N compute engines 702a-702n may have generated a different encryption key (e.g., a key used for encryption and for identity verification) for communication with the tenant 714. Thus, if a compute engine of the first-N compute engines 702a-702n is compromised, the tenant 714 may bypass interactions with the compromised compute engine. Furthermore, the compromised compute engine may be unable to spoof or mimic other compute engines of the first-N compute engines 702a-702n since the compromised compute engine cannot recreate the unique encryption keys used by the other compute engines. That is, the compromised compute engine cannot encrypt and/or sign messages according to another compute engine's unique encryption key without also knowing the unique device secret (UDS) and other disambiguation values, which prevents the compromised compute engine from mimicking messages from another compute engine. Furthermore, the compromised compute engine cannot decrypt messages intended for another compute engine since the unique encryption key (which is used for decryption) is unknown to the compromised compute engine.
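The non-mimicking property described above can be illustrated with the following toy Python sketch, in which each compute engine derives a unique key from its own seed and an engine-specific disambiguation value. HMAC tags stand in for signatures/encryption under the per-engine identity keys; all names and values are illustrative assumptions.

```python
# Toy sketch: per-engine keys derived from unique seeds plus a disambiguation value, so a
# compromised engine cannot produce valid tags (stand-ins for signatures) as another engine.
import hashlib
import hmac
import secrets

def engine_key(key_seed: bytes, engine_id: str) -> bytes:
    # Disambiguate the per-engine key with the engine identity.
    return hmac.new(key_seed, engine_id.encode(), hashlib.sha256).digest()

seeds = {f"engine-{i}": secrets.token_bytes(32) for i in range(3)}   # unique per-engine seeds
keys = {eid: engine_key(seed, eid) for eid, seed in seeds.items()}

message = b"workload result"
tag_from_engine0 = hmac.new(keys["engine-0"], message, hashlib.sha256).digest()

# A compromised engine-1 does not hold engine-0's seed/key, so any tag it forges fails
# the tenant's verification against engine-0's key.
forged_tag = hmac.new(keys["engine-1"], message, hashlib.sha256).digest()
assert hmac.compare_digest(
    tag_from_engine0, hmac.new(keys["engine-0"], message, hashlib.sha256).digest())
assert not hmac.compare_digest(forged_tag, tag_from_engine0)
```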

[0311] The attestation and key manager 712a may generate evidence (e.g., hashes of the software and hardware measurements) and sign the evidence with a certificate. The attestation and key manager 712a may send the evidence 724 to the third attestation environment 712b (e.g., a RoT hardware). The tenant 714 may verify the security of the GPU 736 by reviewing the signed certificates and evidence. If a compute engine of the first-N compute engines 702a-702n is identified as being compromised, the compute engine may be locked out of workloads by the tenant 714 by avoiding the use of an encryption key associated with the compromised compute engine. The tenant 714 may then execute a workload on the secure GPU 736.

[0312] Thus, some embodiments bootstrap the plurality of compute engines 702 with attestable identities and key generation seeds at a tenant-specific granularity (e.g., each tenant can specify different key seeds). Each compute engine of the compute engines 702 may derive additional keys (e.g., per-tenant slice keys if a resource manager requires finer grained resource partitioning). As resources are reclaimed and/or reallocated, different slice/engine contexts may elastically disappear or reappear. Attestation and key seed contexts may be re-created as needed to support elasticity. Furthermore, in some embodiments the GPU 736, including the compute engines 702, may conduct an attestation process as described above for each tenant that begins to execute on the GPU 736.

[0313] As shown in FIG. 8B, an example device architecture 758 (e.g., a GPU) demonstrates layering applied to GPU resources and how a one-way function (OWF) may be used to derive a key seed for a next layer.

[0314] The RoT 750 has a first function 750b (e.g., an OWF) that accepts as input entropy a UDS 750c (unique device secret) and one or more values from the zero layer 752 (e.g., a GPU firmware boot). The values may be identified by the first Trusted Component Identity (TCI) 750a and may be context information. The values may be hashed.

[0315] The output of the first function 750b may be a key seed for the zero layer 752. The output of the first function 750b may be provided to a first composite device identifier (CDI) 752c that receives the output (e.g., a key seed) and may modify the output. A first TCI 752a may identify values of a first layer 754 (e.g., a GPU resource manager) and may be context information. The values may be hashed. A first function 752b may receive the outputs of the first CDI 752c and the first TCI 752a and provide a key seed to the first layer 754. The first layer 754-N layer 756 (e.g., GPU compute engines/lanes) may operate similarly to as described above and in conjunction with other layers (not illustrated).
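A minimal sketch of the FIG. 8B layering, under the assumption that HMAC-SHA256 stands in for the hardware one-way function, is shown below: each layer's seed is derived from the previous layer's output (CDI) and a digest (TCI) of the next layer's firmware. The layer names and images are illustrative only.

```python
# Toy rendering of the FIG. 8B layering: each layer's seed is an OWF of the previous
# layer's output and the next layer's measured values. Names/images are illustrative.
import hashlib
import hmac

def owf(cdi: bytes, tci: bytes) -> bytes:
    # One-way function: the next layer's seed depends on the current CDI and the next TCI.
    return hmac.new(cdi, tci, hashlib.sha256).digest()

def derive_chain(uds: bytes, layer_images):
    cdi = uds
    seeds = []
    for image in layer_images:
        tci = hashlib.sha256(image).digest()   # Trusted Component Identity of the next layer
        cdi = owf(cdi, tci)                    # key seed handed to that layer
        seeds.append(cdi)
    return seeds

uds = b"unique-device-secret-fused-in-hardware"   # input entropy to the RoT 750 (illustrative)
images = [b"gpu-firmware-boot",                   # zero layer 752
          b"gpu-resource-manager",                # first layer 754
          b"gpu-compute-engine-lane-0"]           # ... through the N layer 756

good = derive_chain(uds, images)
tampered = derive_chain(uds, [b"tampered-gpu-firmware-boot"] + images[1:])
# Tampering with an earlier layer changes its seed and every seed derived after it.
assert good[0] != tampered[0] and good[-1] != tampered[-1]
```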

[0316] In some embodiments, each of the RoT 750 and the zero-N layers 752-756 may persist rather than being torn down or deactivated during various boot processes, power ups or context switches. Additionally, the RoT 750 and the zero-N layers 752-756 may be isolated from each other to the point where data (e.g., encryption keys and seeds) are securely maintained and not compromised by unauthorized elements of the RoT 750 and the zero-N layers 752-756.

[0317] Turning now to FIG. 8C, process 760 illustrates securing trust through interactions between a tenant T1 762 and a compute engine E1 764. In some embodiments, compute engines and/or lanes, such as the compute engine E1 764, attest to hosting environment security 766 properties and may also supply an engine-specific or lane-specific key wrapping key. The wrapping key may be generated based on an encryption seed derived from a Unique Device Secret (UDS) or a Compound Device Identifier (CDI) that contains entropy derived from a UDS value and data from another layer. The wrapping key may be an identity encryption key as described herein. The compute engine E1 764 and any other compute engine in the lane associated with the tenant T1 762 may provision a tenant-specific key-encryption-key (KEK) 768. In some embodiments, the compute engine E1 764 may also verify attestation of the tenant T1 762 to enforce security and reduce tampering of the compute engine E1 764 by malicious actors. The tenant T1 762 generates content keys, context encryption keys, and encrypted content and/or context, then wraps at least the content encryption key with the KEK for use by the compute engine E1 764.
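The KEK wrapping step can be illustrated with the following toy Python sketch. A real implementation would use an asymmetric scheme such as RSA-OAEP or a standardized AES key wrap; the keystream XOR below, and the names used, are illustrative assumptions only.

```python
# Toy sketch: the tenant wraps its content encryption key with the engine-provisioned KEK
# before handing it to compute engine E1. XOR with an HMAC-derived pad is a stand-in for a
# real key-wrapping algorithm and is for illustration only.
import hashlib
import hmac
import secrets

def wrap(kek: bytes, key_to_wrap: bytes) -> bytes:
    pad = hmac.new(kek, b"kek-wrap", hashlib.sha256).digest()[:len(key_to_wrap)]
    return bytes(a ^ b for a, b in zip(key_to_wrap, pad))

unwrap = wrap   # XOR wrapping is symmetric in this toy model

kek = secrets.token_bytes(32)                 # tenant-specific KEK 768 (illustrative)
content_key = secrets.token_bytes(32)         # tenant T1's content encryption key

wrapped = wrap(kek, content_key)              # travels with the encrypted content/context
assert unwrap(kek, wrapped) == content_key    # compute engine E1 recovers the content key
```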

[0318] The compute engine E1 764 may provision the tenant T1 data and context 772. In some embodiments, the compute engine E1 764 may decrypt the encrypted content encryption key with the KEK to decipher the data provided by the tenant T1 762.

[0319] Thus, some compute engine environments are elastically formed from a hardware RoT (e.g., UDS/PUFs) where each CE may be specialized according to hosting requirements, AI model provisioning, etc., or may be clones (but distinguishable by instance).

[0320] Compute engines, including the compute engine E1 764, may have compute engine-specific identities and keys that attest to security properties and other capabilities to tenants, such as the tenant T1 762, or other peers interacting with the GPU. In such a case, the peer verifies that the compute engine E1 764 and/or lane environment is suitable for the tenant T1 762 workload/application. The compute engine E1 764 may request the tenant T1 762 to attest, to ensure that the tenant identity and context meet minimum security requirements and to establish the tenant endpoint context. Additionally, the compute engine E1 764 provisions the KEK (e.g., an RSA public key) for the tenant T1 762 to wrap its context and/or content. The KEK may be provisioned (e.g., transmitted) by the compute engine E1 764 after the tenant T1 762 verifies the evidence that the compute engine E1 764 is secure (e.g., reviews attestation data).

[0321] The tenant T1 762 provisions tenant data, AI models and workload execution code/context securely using tenant-specific encryption key(s). The compute engine E1 764 unwraps the key and decrypts the context/content to perform the application/workload.

[0322] FIG. 8D illustrates a method 780 to securely attest to elements of a graphics processor (e.g., a GPU). The method 780 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example, the environment 700 (FIG. 8A), the example device architecture 758 (FIG. 8B) and the process 760 (FIG. 8C) already discussed. More particularly, the method 780 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

[0323] Illustrated processing block 782 transmits, with a first target environment of a plurality of target environments, first key seeds to compute engines of a graphics processor. Illustrated processing block 784 collects claims, with the first target environment, from the compute engines to generate evidence. Illustrated processing block 786 generates, with the compute engines, unique identity keys for each of the compute engines based on the first key seeds. The plurality of target environments may be part of the graphics processor.

[0324] In some embodiments, method 780 further includes transmitting, with the plurality of target environments, second key seeds to each other. In some embodiments, method 780 further includes generating, with the plurality of target environments, unique identity keys based on the second key seeds. In some embodiments, method 780 further includes collecting, with the plurality of target environments, claims of the plurality of target environments, and generating evidence for attestation based on the claims of the plurality of target environments. In some embodiments, method 780 generates, with a RoT hardware of the graphics processor, a key seed for a second target environment of the plurality of target environments. In some embodiments, method 780 further collects claims, with the RoT hardware, from the second target environment, and generates, with the RoT hardware, evidence based on the claims collected from the second target environment.

[0325] Confidential Guest VM Display (FIGS. 9A-9D)

[0326] Embodiments as described herein relate to isolation and preservation of confidential data between different systems on a security enhanced computing architecture 900. For example, in FIG. 9A, the computing architecture 900 includes a trusted execution environment 932, a virtual machine manager (VMM) 1008 (e.g., a hypervisor), a host operating system (OS) 926 and confidential guest OS 902.

[0327] The VMM 1008 has the ability to create domains, such as the confidential guest OS 902 or other virtual machines, that are sufficiently isolated, permitting computations that are confidential with respect to all other domains on the architecture. Specifically, data of the confidential guest OS 902 (e.g., a virtual machine) is isolated from the host OS 926 (e.g., an `open` domain in client platforms). Such data isolation may be readily enforced as long as the data remains under the control of the confidential guest OS 902. In some cases, however, the data may need to be transferred to the host OS 926 to be under the control of the host OS 926.

[0328] For example, the host OS 926 may control underlying hardware of the computing architecture 900, such as the display 948 and/or a GPU (e.g., graphics processor). For example, the host OS 926 may include software that interacts with underlying hardware of the architecture 900. The host OS 926 may operate as a medium between the confidential guest OS 902 and the hardware to execute actions (e.g., input/output actions) on behalf of the confidential guest OS 902. In doing so, malicious actors on the host OS 926 may attempt to access the data of the confidential guest OS 902. In order to mitigate such unauthorized accesses, some embodiments as described herein encrypt the data of the confidential guest OS 902 to prevent software of the host OS 926 from accessing and decrypting the data. Secured hardware elements (e.g., a GPU) that are less prone to software-based vulnerabilities may be able to decrypt and manipulate the data, while preventing malicious software from accessing the data. Thus, some embodiments may preserve data confidentiality of the confidential guest OS 902 across software boundaries by only permitting a limited number of hardware-based elements to access and decrypt the data.

[0329] In some examples, the confidential guest OS 902 and the host OS 926 may communicate through a proxy application 928 in order to present visual information (e.g., output, dialog boxes, etc.) to a user of the architecture. In order to do so, the output from the confidential guest OS 902 may traverse through the host OS 926. Doing so may include sharing output buffers with the host OS 926. While some implementations may attempt to make data un-scrapable to prevent other processes in the host OS 926 from copying the contents, such security measures may detrimentally rely on an uncompromised software execution on the host OS 926 and/or the VMM 1008.

[0330] Thus, some embodiments may augment security through an enhanced communication and encryption process that leverages secure hardware-based elements to handle unencrypted data and encrypt the data. For example, some embodiments may leverage graphics hardware to implement a robust and well-developed mechanism for the handling and display of digital video content while preventing interception and/or inspection of the content data by software of the host OS 926 and/or other malicious software. For example, the host OS 926 may include a Protected Audio/Visual Path (PAVP) session 930 that may securely protect encrypted content while at rest in buffers. The PAVP session 930 may employ inline encryption engines to ensure that protected data is encrypted whenever it is at rest in system memory and/or in transit within the system busses. Data may be encrypted with a first encryption key (e.g., a "session" key), passed through various portions of the host OS 926, decrypted and then encrypted again with a second encryption key (e.g., a display key) different from the first encryption key.

[0331] The confidential guest OS 902 may be a producer of confidential information to be output. Architecture 900 may include a software-based security implementation in which the confidential guest OS 902 may bypass leveraging a GPU (e.g., graphics hardware) to encrypt data (e.g., may not have direct access to a GPU). A hardware-based composition engine 934 (e.g., GPU) may however be able to at least decrypt and composite the data as will be explained below.

[0332] The confidential guest OS 902 may be considered isolated from other VMs (not illustrated) and the Host OS 926. The other VMs may interact with the VMM 1008 and/or host OS 926. Thus, the confidential guest OS 902 may seek to enforce data isolation principles (e.g., prevent software access) from the other VMs, host OS 926 and/or VMM 1008.

[0333] A guest certificate 922 may be pre-provisioned, for example when the architecture 900 is manufactured, installed and/or initialized, into the confidential guest OS 902 and the Trusted Execution Environment (TEE) 932 (e.g., a secure area of a main processor, hardware security module (HSM), secure execution environment, Dynamic Application Loader (DAL), trust domain extensions (TDX), etc.). The confidential guest OS 902 and the TEE 932 establish a secure session 938 by proving authenticity to each other using the guest certificate 922. The design of the system is agnostic of the choice of secure session protocol. Once the secure session is established, the confidential guest OS 902 and the TEE 932 generate a session key (e.g., an encryption key such as a symmetric session key), and the TEE 932 transmits the key to a hardware element, such as a GPU. Once established, the session key is stored in the encryption engine 910 and used for encryption as explained below. The confidential guest OS 902 may thus utilize a content encryption key (e.g., the session key) provided by a confidential application and/or vendor.
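As a hedged illustration of the session setup, the following Python sketch assumes both endpoints hold the same pre-provisioned certificate secret and derive an identical session key that the TEE then hands to the GPU encryption engine. The derivation shown is not a specific secure-session protocol (the design above is protocol-agnostic), and all names and labels are illustrative.

```python
# Toy sketch: the confidential guest OS and the TEE, which both hold pre-provisioned guest
# certificate material, derive the same session key; the TEE then provides that key to the
# GPU encryption engine. HMAC-SHA256 is an illustrative stand-in, not a specific protocol.
import hashlib
import hmac
import secrets

guest_certificate_secret = secrets.token_bytes(32)   # pre-provisioned in guest OS 902 and TEE 932
session_nonce = secrets.token_bytes(16)               # exchanged during secure session setup

def derive_session_key(cert_secret: bytes, nonce: bytes) -> bytes:
    return hmac.new(cert_secret, b"pavp-session" + nonce, hashlib.sha256).digest()

guest_side_key = derive_session_key(guest_certificate_secret, session_nonce)   # guest OS 902
tee_side_key = derive_session_key(guest_certificate_secret, session_nonce)     # TEE 932

gpu_encryption_engine_key = tee_side_key   # TEE transmits the session key to the GPU
assert guest_side_key == gpu_encryption_engine_key
```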

[0334] The confidential application 904 of the confidential guest OS 902 may generate data 1040. A render engine 906 may generate render data 918 (e.g., image data, and/or related to software rendering and/or rasterization) based on the generated data of the confidential application 904. The confidential buffer 908 may not be encrypted at this point so that the render data is unencrypted.

[0335] The confidential application 904 may issue an instruction for encryption according to the session key 914 to the encryption engine 910. As noted, the session key is stored in the encryption engine 910. In response to receiving the issued instruction, the encryption engine 910 encrypts the confidential buffer 908, 920, and more particularly encrypts the render data stored in the confidential buffer 908 according to the session key (e.g., with a Widevine encryption scheme as one possible embodiment), and stores the data in the encrypted confidential buffer 912. It is worthwhile to note that the confidential buffer 908 and the encrypted confidential buffer 912 may be the same buffer in some embodiments, with the distinction being that the encrypted confidential buffer 912 stores the encrypted render data while the confidential buffer 908 stores the unencrypted render data.

[0336] The confidential guest operating system 902 may then pass the message 924 to the proxy application 928. The message may include the encrypted confidential buffer 912 (or a pointer thereto) and a request to render the encrypted render data. The proxy application 928 passes the message 916 to the PAVP session 930. The PAVP session 930 passes the message 936 to the hardware-based composition engine 934.

[0337] The hardware-based composition engine 934 may be a hardware element (e.g., a host processor, GPU, accelerator, vision processing unit, etc.) to enhance security. It is to be noted that prior to this point, the confidential guest OS 902 has only provided encrypted render data to the host OS 926 to prevent malicious actors from accessing the data. The hardware-based composition engine 934 may be under the control of, and/or receive commands from, the host OS 926, but software of the host OS 926 does not have direct access to the data locations of the hardware-based composition engine 934, thus making tampering and/or reading attacks on the render data more difficult. Thus, the hardware-based composition engine 934 may be less prone to malicious attacks due to the hardware-based composition engine 934 being implemented in a hardware structure. In some embodiments, the composition engine 934 may be a GPU.

[0338] An application 940 of the host OS 926 may pass application data 942 to be rendered to the hardware-based composition engine 934. The application data may be displayed in conjunction (e.g., simultaneously) with the confidential data. In order to process the confidential data and the non-confidential application data together into a final image for display, the hardware-based composition engine 934 may have received a copy of the session key (or another decryption key) from the TEE 932 to decrypt the encrypted render data into clear text. The composition engine 934 may composite the unencrypted render data and the unencrypted application data together to generate composited render and application data. Thereafter, the hardware-based composition engine 934 may encrypt the composited render and application data according to a display key (e.g., a second key), which is different from the session key (e.g., a first key), to generate encrypted composited data. The hardware-based composition engine 934 may then store the encrypted composited data in an encrypted display buffer 944, which may be external to the hardware-based composition engine 934. Thus, in some embodiments, the hardware-based composition engine 934 may store only encrypted data (e.g., encrypted versions of the render data) outside the hardware-based composition engine 934, while all unencrypted data operations execute internally within the hardware-based composition engine 934.

[0339] When the encrypted composited data is ready for display, the hardware-based composition engine 934 (e.g., a GPU) may decrypt and display the data 944. For example, the hardware-based composition engine 934 may decrypt the encrypted composited data and present the decrypted composited data on the display 948.

[0340] The display 948 may show an application graphical user interface (GUI), which is based on the application data of the host operating system 926, and a guest GUI, which is based on the render data generated by the confidential guest operating system 902. Thus, the plaintext of the render data is not directly accessible outside of the GPU hardware pipeline and remains in a protected state from malicious software that may be on the host OS 926.

[0341] As such, some embodiments may facilitate a security enhanced communication process. Further, some embodiments may leverage hardware elements to enhance security.

[0342] It is worthwhile to note that some embodiments are agnostic to the specific implementation of the TEE 932 as long as the TEE 932 maintains confidentiality of data. The plaintext of the render data will only be available to the confidential guest OS 902 or while protected in the GPU hardware pipeline. Some embodiments may be modified to apply to encrypted content that may be streamed across network connections from a remote source. For example, the remote source may execute a process similar to the confidential guest OS 902 while a display device may execute a process similar to the host OS 926.

[0343] Thus, the Host OS 926 may be able to handle confidential VM output (e.g., the render data) in a flexible manner to display the render data in context of other visual output from the host OS 926 while robustly protecting the integrity of the data from potential hostile code in the Host OS 926.

[0344] In some embodiments, the content of the confidential guest OS 902 (e.g., a virtual machine) is isolated in output buffers from host OS 926 visibility (e.g., to prevent screen scraping of content). In some embodiments, the confidential guest OS 902 may utilize software and/or hardware rendering (e.g., through Peripheral Component Interconnect (PCI) device assignment, single root input/output virtualization (SR-IOV), or a CPU-based render such as the Windows Advanced Rasterization Platform (WARP)) and subsequently encrypt the buffers to prevent the host OS 926 from having access to the screen or render data itself. By encrypting the confidential buffer 908 to generate the encrypted confidential buffer 912, the confidential guest operating system 902 protects the confidentiality of the render data.

[0345] In some embodiments, the VMM 1008 may operate similarly to the host OS 926 to interact with hardware on behalf of the confidential guest OS 902. In such embodiments, the confidential guest OS 902 may encrypt and transfer data to the VMM 1008 similarly to as described herein, and a GPU associated with the VMM 1008 may decrypt, composite, encrypt the composited data, and decrypt the composited data for display.

[0346] FIG. 9B illustrates a hardware accelerated confidential display computing architecture 960. In this particular example, a confidential guest OS 962 may have access to a hardware element (e.g., a GPU such as a graphics processor, and/or a processing unit) to encrypt data, rather than relying on software mechanisms to do so. For example, in some embodiments the confidential guest OS 962 has access to the services of the GPU (e.g., via SR-IOV). As such, the confidential guest OS 962 may employ the services of the GPU to render and/or rasterize and to execute encryptions.

[0347] In this particular example, a confidential application 964 may generate data 966. A hardware render engine 968 (e.g., a component of the GPU) may generate render data 970 based on the received data, and store the render data into confidential buffer 972. The confidential guest OS 962 causes (e.g., passes an instruction to command an encryption operation) the GPU encryption engine 974 (e.g., a hardware element of the GPU) to encrypt the confidential buffer 972, 976 (e.g., execute a PAVP encryption process) to generate the encrypted confidential buffer 978. The encrypted confidential buffer 978 may contain the render data in an encrypted form. The GPU may encrypt the render data according to a first encryption key. The GPU may be responsible for decryption of the render data at a later time, and thus maintain the first encryption key in a secure storage location on the GPU (e.g., a register) to bypass storage of the first encryption key outside the GPU.

[0348] Similar to the above, the confidential buffer 972 and the encrypted confidential buffer 978 may be the same buffer, but the confidential buffer 972 may store unencrypted render data while the encrypted confidential buffer 978 may store encrypted render data. For example, the confidential guest OS 962 may access the GPU via a PAVP session to encrypt the confidential buffer 972 and generate the encrypted confidential buffer 978. In doing so, the render data is encrypted and stored in the encrypted confidential buffer 978 to protect the render data before sharing the render data with the host OS 982. The confidential guest operating system 962 may use a privileged application programming interface (API) to communicate directly with the GPU. The API may not route through the host OS 982 or the VMM 1006, so that the API is contained and controlled by the confidential guest OS 962 and may allocate encrypted buffer space not controlled by any other guest or VM.

[0349] The confidential guest OS 962 may pass message 980 to the proxy application 984. The message may include the encrypted confidential buffer 978 and/or a location of the encrypted confidential buffer 978 (e.g., a pointer). The message may further include an instruction to display the render data. The proxy application 984 passes the message 986 to a PAVP session 990. The PAVP session 990 passes the message 988 to composition engine 996.

[0350] An application 992 of the host OS 982 may pass application data 994 (e.g., data to be displayed) to a hardware-based composition engine 996. The application data may be displayed in conjunction (e.g., simultaneously) with the confidential data. In order to do so, the hardware-based composition engine 996 may decrypt the encrypted render data into clear text. The hardware-based composition engine 996 may be part of the GPU, and thus already have access to the first encryption key to execute decryption as discussed above. The hardware-based composition engine 996 may composite the unencrypted render data and the unencrypted application data together. Thereafter, the hardware-based composition engine 996 may encrypt the composited render data and the application data together according to a second encryption key, which is different from the first encryption key, to generate encrypted data. The composition engine 996 may then store the encrypted data 998 (e.g., a ciphertext of the render and application data) in an encrypted display buffer 1000.

[0351] When the encrypted data is ready for display, the hardware-based composition engine 996 (e.g., a GPU) may decrypt the data and display the data 1002 on the display 1004. The display 1004 may show an application graphical user interface (GUI), which is based on the application data of the host operating system 982, and a guest GUI, which is based on the render data generated by the confidential guest OS 962. Thus, the plaintext of the render data is not directly accessible outside of the graphics processor hardware pipeline and remains in a protected state from malicious software that may be on the host OS 982.

[0352] As such, some embodiments may facilitate a security enhanced communication process. Further, some embodiments may leverage hardware elements to enhance security.

[0353] The plaintext of the render data will only be available to the confidential guest OS 962 or while protected in the graphics processor hardware pipeline. Some embodiments may be modified to apply to encrypted content that may be streamed across network connections from a remote source. For example, the remote source may execute a process similar to the confidential guest OS 962 while a display device may execute a process similar to the host OS 982.

[0354] In some embodiments, the VMM 1006 may operate similarly to the host OS 982 to interact with hardware on behalf of the confidential guest OS 962. In such embodiments, the confidential guest OS 962 may encrypt and transfer data to the VMM 1006 similarly to as described herein, and a graphics processor associated with the VMM 1006 may decrypt, composite, encrypt the composited data, and decrypt the composited data for display.

[0355] FIG. 9C illustrates a method 1010 to securely transfer data, which is to be rendered, from a guest OS (e.g., a virtual machine) to a host OS for display. The method 1010 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example, the architectures 900 and 960 (FIGS. 9A and 9B) already discussed. More particularly, the method 1010 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

[0356] Illustrated processing block 1012 generates guest data by a guest OS (e.g., a virtual machine). Illustrated processing block 1014 encrypts the guest data on the guest OS side with a first encryption key (e.g., either on a GPU or with a session key from a TEE described above). Illustrated processing block 1016 transfers the encrypted data via a PAVP of a host OS side. Illustrated processing block 1018 decrypts, with a GPU, the guest data with the first encryption key to generate clear text that may be interleaved and/or composited with other data. Illustrated processing block 1020 combines (e.g., composites and/or interleaves), with the GPU, the guest data with host data (e.g., application data) generated on the host side. Illustrated processing block 1022 encrypts the combined guest and host data with a second encryption key. The second encryption key may be different from the first encryption key. Illustrated processing block 1024 stores the encrypted combined guest and host data to a display buffer. In response to a display request, illustrated processing block 1026 decrypts the encrypted combined guest and host data with the second key to generate clear text that may be in a displayable format. Illustrated processing block 1028 displays the decrypted combined guest and host data.
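The following toy Python walk-through maps the processing blocks of method 1010 to code under the assumption that a SHA-256 counter-mode keystream stands in for the PAVP/display encryption engines; the labels, keys, and data are illustrative only.

```python
# Toy end-to-end walk-through of method 1010, mapping each processing block to one step.
# SHA-256 counter-mode keystreams stand in for the real PAVP/display encryption engines.
import hashlib
import secrets

def keystream_xor(key: bytes, label: bytes, data: bytes) -> bytes:
    out, counter = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + label + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

session_key = secrets.token_bytes(32)   # first encryption key (block 1014)
display_key = secrets.token_bytes(32)   # second encryption key (block 1022)

guest_data = b"guest render data"                               # block 1012
encrypted_guest = keystream_xor(session_key, b"s", guest_data)  # block 1014
transferred = encrypted_guest                                   # block 1016 (via PAVP)

clear_guest = keystream_xor(session_key, b"s", transferred)     # block 1018 (GPU decrypt)
host_data = b"host application data"
combined = clear_guest + b" | " + host_data                     # block 1020 (composite)

encrypted_combined = keystream_xor(display_key, b"d", combined) # block 1022
display_buffer = encrypted_combined                             # block 1024

shown = keystream_xor(display_key, b"d", display_buffer)        # blocks 1026-1028
assert shown == combined
```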

[0357] Thus, some embodiments may permit only the guest OS side and the GPU to view clear text data. Doing so may enhance security and prevent access (e.g., scraping) by malicious actors.

[0358] FIG. 9D illustrates a method 1030 to securely handle data. The method 1030 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example, the architectures 900 and 960 (FIGS. 9A and 9B) and the method 1010 already discussed. More particularly, the method 1030 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

[0359] Illustrated processing block 1032 generates, with a virtual machine, confidential data to be rendered. Illustrated processing block 1034 encrypts, with one or more of a graphics processor or the virtual machine, the confidential data according to a first encryption key to generate encrypted confidential data. Illustrated processing block 1036 stores the encrypted confidential data in a first buffer. Illustrated processing block 1038 decrypts, with the graphics processor, the encrypted confidential data to generate decrypted confidential information.

[0360] In some embodiments, the method 1030 conducts a verification process with a trusted execution environment to prove an identity of the virtual machine, receives, with the virtual machine, a session key from the trusted execution environment, wherein the session key is to be the first encryption key, and receives, with the graphics processor, the session key from the trusted execution environment. In such embodiments, the session key is to be a private symmetric digital rights management (DRM) session key.
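As a conceptual aid for the provisioning described above, the sketch below shows a hypothetical trusted execution environment that verifies a virtual machine's identity claim before releasing the same session key to the virtual machine and to the graphics processor; the class, its verification check, and the key size are assumptions for illustration only.

```python
import hashlib
import hmac
import os

class TrustedExecutionEnvironmentStub:
    """Hypothetical TEE that gates session-key release on identity verification."""

    def __init__(self):
        self._sessions = {}

    def provision(self, vm_id, attestation_evidence, expected_digest):
        # Verification process: prove the identity of the virtual machine.
        measured = hashlib.sha256(attestation_evidence).hexdigest()
        if not hmac.compare_digest(measured, expected_digest):
            raise PermissionError("virtual machine identity verification failed")
        # Private symmetric DRM session key, used as the first encryption key.
        session_key = os.urandom(16)
        self._sessions[vm_id] = session_key
        return session_key                    # received by the virtual machine

    def release_to_gpu(self, vm_id):
        return self._sessions[vm_id]          # received by the graphics processor
```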

[0361] In some embodiments, the graphics processor generates the first key. In some embodiments, the method 1030 includes compositing the decrypted confidential data with application data to generate composited confidential and application data. The application data is associated with one or more applications executing on a host operating system. In some embodiments, the method 1030 encrypts the composited confidential and application data according to a second key to generate encrypted composited confidential and application data, wherein the second key is to be different from the first key, and stores the encrypted composited confidential and application data in a second buffer that is to be different from the first buffer. In some embodiments, in response to an identification that the encrypted composited confidential and application data will be displayed, the method 1030 decrypts the encrypted composited confidential and application data according to the second key.

[0362] Support Paging of Encrypted Buffers in the GPU (FIGS. 9E-9G)

[0363] Some embodiments may enhance performance by permitting paging operations. For example, a memory manager may page out data from a low latency storage (e.g., memory) to a high latency storage (e.g., mass storage device). A virtual table may maintain a list of the virtual addresses and corresponding locations in the physical memory (e.g., map of virtual addresses to physical locations) to enable the paging operations.

[0364] Some encryption operations may be reliant on physical memory locations (e.g., the encryption may be adjusted based on the physical memory location). For example, an encryption mode may be an Advanced Encryption Standard (AES) cipher (e.g., XEX-based tweaked-codebook mode with ciphertext stealing (XTS)) based on a host-physical address (HPA). The AES-XTS-HPA domain encryption may enhance some operations by providing a layer of security. Pages that are encrypted according to AES-XTS-HPA may need to be "pinned" for the lifetime of the pages so that the pages cannot be paged out. That is, AES-XTS-HPA relies on the physical address associated with the data for encryption and decryption, so the data must remain at the same physical address after encryption or the data may not be decrypted properly. Operating systems cannot guarantee that data paged out from a first physical address will be paged back into the same first physical address, which results in changes to memory locations, particularly when a CPU is not within the TCB (e.g., software running on the CPU such as the OS/VMM may not be secure; the CPU hardware is in the TCB, and a trusted application running on the CPU that offloads computation to the GPU is also in the TCB). Thus, while security may be enhanced, memory may be consumed by AES-XTS-HPA pages that cannot be paged out, resulting in inefficient memory usage, higher latency (particularly for memory intensive operations), and fairness inequities among different applications.

[0365] As an example, some GPU-allocated buffers store data in an encrypted form using the HPA for a tweak in local memory and/or system memory to address potential threats. In some examples, a central processing unit (CPU) may not be within a trust control boundary. Therefore, the GPU may encrypt the buffers for security and to reduce unauthorized accesses by other elements, such as the CPU. Such encrypted buffers may have to be pinned for their lifetime with the AES-XTS-HPA encryption. As noted, doing so results in paging operations on these buffers being unsupported. For example, if the encrypted data is paged out from a first memory location and then paged in again into a second memory location, the GPU may be unable to decrypt the data since the data has been moved. That is, the data is encrypted according to a tweak based on the first memory location. When the data is retrieved from the second memory location, the data is decrypted with a tweak based on the second memory location, which results in an unusable output since the data was not encrypted according to the second memory location.
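The short sketch below (illustrative only) demonstrates the pinning problem described above: with the host physical address folded into an AES-XTS tweak, a page decrypted at a different address than it was encrypted at yields an unusable output. The derivation of the tweak from the HPA is a simplified assumption.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                         # AES-128-XTS uses a 256-bit (double-length) key

def xts_hpa_encrypt(data, hpa):
    tweak = hpa.to_bytes(16, "little")       # hypothetical HPA-derived tweak
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(data) + enc.finalize()

def xts_hpa_decrypt(data, hpa):
    tweak = hpa.to_bytes(16, "little")
    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return dec.update(data) + dec.finalize()

page = os.urandom(4096)
ciphertext = xts_hpa_encrypt(page, hpa=0x1000)           # encrypted at a first location
assert xts_hpa_decrypt(ciphertext, hpa=0x1000) == page   # same location: decrypts correctly
assert xts_hpa_decrypt(ciphertext, hpa=0x2000) != page   # paged back elsewhere: unusable output
```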

[0366] To support paging operations, some embodiments use a GPU direct memory access (DMA) engine to perform the paging operation using a specific paging key and convert the buffer from a first encryption domain that is based on HPA tweaks (e.g., AES-XTS-HPA) to an HPA-agnostic encryption domain (e.g., an authenticated encryption with associated data (AEAD) mode of encryption/integrity such as GCM, CCM, or ChaCha20-Poly1305).

[0367] An AEAD scheme may be based on authenticated encryption that allows a recipient to check the integrity of both the encrypted and unencrypted information in a message. For example, an AEAD scheme may bind associated data (AD) to the ciphertext and to the context. Doing so may detect manipulation of data into different contexts.
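A brief sketch of the conversion into the HPA-agnostic AEAD domain follows; the paging key, the choice of AES-GCM, and the use of a buffer identifier plus the global counter as associated data are illustrative assumptions (the preceding AES-XTS-HPA decryption step is elided).

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

paging_key = AESGCM.generate_key(bit_length=256)   # specific paging key used by the DMA engine

def convert_for_page_out(clear_page, buffer_id, global_counter):
    # Associated data binds the ciphertext to its context (buffer + counter),
    # so data replayed into a different context fails authentication.
    aad = buffer_id.to_bytes(8, "little") + global_counter.to_bytes(8, "little")
    nonce = os.urandom(12)
    ciphertext = AESGCM(paging_key).encrypt(nonce, clear_page, aad)
    return nonce, ciphertext, aad

def check_on_page_in(nonce, ciphertext, aad):
    # Raises InvalidTag if either the data or its associated context was altered.
    return AESGCM(paging_key).decrypt(nonce, ciphertext, aad)
```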

[0368] Turning now to FIG. 9E, an encryption conversion scheme with a paging process 1050 is illustrated. The first encrypted data 1052 may be stored in a first buffer 1058, which may be a local storage of the GPU. The GPU may then map the first encrypted data 1052 to an HPA-agnostic scheme, and generate identification data 1078, 1060. The HPA-agnostic scheme may be an AEAD encryption scheme. Thus, the data may be converted from the host physical address based encryption domain 1054 to the host physical address agnostic encryption domain 1062 to generate second encrypted data 1064 and the identification data 1078.

[0369] As noted, identification data 1078 may be generated. The identification data 1078 may be a page (e.g., a message authentication code (MAC) page) in protected memory 1056 that includes specific data (e.g., a checksum and/or MAC value) associated with the second encrypted data 1064 to verify the second encrypted data 1064. The identification data 1078 may be stored in a protected memory 1056 (e.g., a stolen memory) that is inaccessible by other components, such as the CPU or host processor, so that the other elements cannot read the identification data 1078 from the protected memory 1056. In some embodiments, a value of a global counter may be stored in association with the root MAC page in the protected memory 1056. The global counter may be incremented every time a paging operation (e.g., page out) is invoked. The value of the global counter may be used for encryption and to identify the values associated with the second encrypted data 1064. Thus, the value may be referenced to identify MAC values associated with the second encrypted data 1064. In some embodiments, the global counter survives all power states of the GPU where sessions continue to remain alive. Additionally, the GPU stores the global counter (e.g., 64 bits) as part of the identification data 1078. The global counter may be reset when the GPU is reset in its entirety.

[0370] The global counter may be used as a reference to identify appropriate MAC pages. In some embodiments, the MAC page in the protected memory 1056 includes the value of the global counter, 254 128-bit MAC values associated with the second encrypted data 1064 (e.g., hash values), a 128-bit MAC value of a previous MAC page in the protected memory 1056, and a 64-bit global counter value of the previous MAC page (e.g., 64 bits storing the counter value of the previous MAC page). It is worthwhile to note that the MAC page may be agnostic to the physical address in the second buffer 1080 that the second encrypted data 1064 is stored within. For example, since the same physical memory address of the second buffer 1080 may be paged-out several times before data is paged-in to the same physical memory address, the physical memory address may be bypassed from being stored in association with the MAC page since the physical memory address may be common to numerous page-in and page-out operations.
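A rough data-structure sketch of the MAC page described above is shown below; the field names and the fixed capacity of 254 entries mirror the text, but the exact layout within the protected memory is an assumption.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MacPage:
    global_counter: int                                     # counter value when the page was created
    mac_values: List[bytes] = field(default_factory=list)   # up to 254 x 128-bit MAC values
    prev_page_mac: Optional[bytes] = None                   # 128-bit MAC of the previous MAC page
    prev_page_counter: Optional[int] = None                 # 64-bit counter value of the previous page

    def add_mac(self, mac: bytes) -> None:
        assert len(mac) == 16, "MAC values are 128 bits"
        assert len(self.mac_values) < 254, "MAC page is full"
        self.mac_values.append(mac)
```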

[0371] A root counter value (e.g., a value of the global counter) for the MAC page may be stored in an internal register of the GPU and in association with an identifier of the second encrypted data 1064. When the second encrypted data 1064 is later retrieved, the corresponding MAC page is required to be loaded into the protected memory by the GPU. In detail, the protected memory may only hold a single MAC page at a time. Using the MAC value from the MAC page in the protected memory, some embodiments may traverse through MAC pages that are linked together by identification data (e.g., one MAC page contains the identification data of another MAC page).

[0372] The second encrypted data 1064 may be paged out to memory 1082. It is worthwhile to note that the storage location of the second encrypted data 1064 is flexible and may be any memory and/or storage device. For example, in some embodiments, a long-term storage may be substituted for the memory 1082. In some embodiments, the identification data 1078 may be maintained in the protected memory 1056.

[0373] The process 1050 may page in the second encrypted data 1064, 1070. The process 1050 may ensure the relevant MAC entries (e.g., 256 MAC entries) associated with a main surface page (e.g., the page being paged in) are loaded in the MAC page prior to the actual paging operation for loading of data. For example, the GPU may reference the register to identify the appropriate root counter value, retrieve (or cause to be retrieved by software) the identification data 1078 from the protected memory 1056, and identify the MAC values.

[0374] The process 1050 may verify during decryption 1074 that the correct page is provided and generate the decrypted data 1084. For example, the GPU DMA engine may compare the MAC generated based on the retrieved data during the decryption operation to the expected value from the MAC page stored in association with the identification data 1078. If the generated MAC does not match the MAC values associated with the second encrypted data 1064 retrieved from the MAC page, further operations based on the second encrypted data 1064 may be bypassed and remedial action taken. As illustrated, the second encrypted data 1064 is decrypted to the first encrypted data 1052 in the host physical address based encryption domain. A GPU may decrypt the first encrypted data 1052 in some embodiments to obtain clear text data.
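The check performed on page-in can be summarized by the following sketch; HMAC-SHA-256 truncated to 128 bits stands in for whatever MAC the DMA engine actually computes, and the key name is hypothetical.

```python
import hashlib
import hmac

def verify_paged_in(mac_key: bytes, paged_in_data: bytes, stored_mac: bytes) -> bool:
    # Recompute the MAC over the retrieved data and compare it with the value
    # recorded in the MAC page for the second encrypted data.
    computed = hmac.new(mac_key, paged_in_data, hashlib.sha256).digest()[:16]
    return hmac.compare_digest(computed, stored_mac)

# If the comparison fails, further operations on the paged-in data are bypassed
# and remedial action can be taken.
```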

[0375] Thus, some embodiments provide enhanced memory usage while still maintaining security boundaries. For example, the GPU may still enforce security even while data is paged out of memory.

[0376] FIG. 9F illustrates a method 1090 to handle paging operations securely. The method 1090 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example the process 1050 (FIG. 9E) already discussed. More particularly, the method 1090 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof. In some embodiments, aspects of method 1090 are implemented in a GPU.

[0377] Illustrated processing block 1092 identifies that encrypted data is in a first format (e.g., AES-XTS-HPA) that does not permit paging operations. Illustrated processing block 1106 determines that the CPU is not within a TCB. For example, an operation (e.g., digital-rights media operations) associated with the encrypted data may not include the CPU within the TCB. Thus, illustrated processing block 1106 identifies that operations associated with the encrypted data do not permit the CPU to access and decrypt the encrypted data. In some embodiments, if the CPU is within the TCB the method 1090 may cease without converting the encrypted data to a format compatible with paging.

[0378] Illustrated processing block 1096 identifies that a page-out operation will be executed. Illustrated processing block 1094 converts the encrypted data to a second format (e.g., AEAD) that permits paging, generates and stores a MAC page (e.g., based on the AEAD format), and increments a global counter. Illustrated processing block 1108 identifies that the encrypted data will be paged-in with a main page.

[0379] Illustrated processing block 1102 retrieves the MAC page corresponding to the main page (that includes the paged out encrypted data). Illustrated processing block 1130 pages in the main page. Illustrated processing block 1112 determines if a stored MAC value of the encrypted data (e.g., as stored in the MAC page) matches (e.g., is the same as) a MAC value calculated based on the paged-in data from the main page. If so, illustrated processing block 1116 executes operations with the paged-in data. Otherwise, the retrieved data is not the same as the encrypted data that was paged-out. Thus, illustrated processing block 1114 bypasses operations with the paged-in data to enforce security protocols.

[0380] It is worthwhile to note that method 1090 may execute for each of a plurality of different data associated with different operations and may execute concurrently for each data that is to be paged-out. Thus, the global counter may be incremented numerous times based on data that is to be paged-out.

[0381] FIG. 9G illustrates a method 1120 of paging data. The method 1120 may generally be implemented in conjunction with any of the embodiments described herein, such as, for example the process 1050 (FIG. 9E) and/or the method 1090 (FIG. 9F) already discussed. More particularly, the method 1120 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof. In some embodiments, aspects of method 1120 are implemented in a GPU.

[0382] Illustrated processing block 1122 identifies that first data is in a first format, where the first format is in a physical address based encryption format. Illustrated processing block 1124 converts, with the graphics processor, the first data from the first format to a second format, where the second format is in a physical address agnostic encryption format. Illustrated processing block 1126 pages-out the first data, that is in the second format, from the memory to the non-volatile storage. In some embodiments, method 1120 increments a global counter in response to an identification that the first data will be paged-out. In some embodiments, method 1120 generates a message authentication code (MAC) value based on the first data that is in the second format. In some embodiments, method 1120 stores the MAC value and a value of the global counter in a protected memory.

[0383] In some embodiments, method 1120 pages-in second data from a storage, calculates a message authentication code (MAC) value based on the second data, and compares the MAC value of the second data to the MAC value stored in the protected memory to determine whether the second data corresponds to the first data. Further, in some embodiments, method 1120 executes one or more operations based on the second data when the MAC value of the second data is the same as the MAC value of the first data, and/or bypasses one or more operations based on the second data when the MAC value of the second data is dissimilar from the MAC value of the first data.

[0384] System Overview

[0385] FIG. 10 is a block diagram of a processing system 1400, according to an embodiment. System 1400 may be used in a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 1402 or processor cores 1407. In one embodiment, the system 1400 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices such as within Internet-of-things (IoT) devices with wired or wireless connectivity to a local or wide area network.

[0386] In one embodiment, system 1400 can include, couple with, or be integrated within: a server-based gaming platform; a game console, including a game and media console; a mobile gaming console, a handheld game console, or an online game console. In some embodiments the system 1400 is part of a mobile phone, smart phone, tablet computing device or mobile Internet-connected device such as a laptop with low internal storage capacity. Processing system 1400 can also include, couple with, or be integrated within: a wearable device, such as a smart watch wearable device; smart eyewear or clothing enhanced with augmented reality (AR) or virtual reality (VR) features to provide visual, audio or tactile outputs to supplement real world visual, audio or tactile experiences or otherwise provide text, audio, graphics, video, holographic images or video, or tactile feedback; other augmented reality (AR) device; or other virtual reality (VR) device. In some embodiments, the processing system 1400 includes or is part of a television or set top box device. In one embodiment, system 1400 can include, couple with, or be integrated within a self-driving vehicle such as a bus, tractor trailer, car, motor or electric power cycle, plane or glider (or any combination thereof). The self-driving vehicle may use system 1400 to process the environment sensed around the vehicle.

[0387] In some embodiments, the one or more processors 1402 each include one or more processor cores 1407 to process instructions which, when executed, perform operations for system or user software. In some embodiments, at least one of the one or more processor cores 1407 is configured to process a specific instruction set 1409. In some embodiments, instruction set 1409 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). One or more processor cores 1407 may process a different instruction set 1409, which may include instructions to facilitate the emulation of other instruction sets. Processor core 1407 may also include other processing devices, such as a Digital Signal Processor (DSP).

[0388] In some embodiments, the processor 1402 includes cache memory 1404. Depending on the architecture, the processor 1402 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 1402. In some embodiments, the processor 1402 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 1407 using known cache coherency techniques. A register file 1406 can be additionally included in processor 1402 and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 1402.

[0389] In some embodiments, one or more processor(s) 1402 are coupled with one or more interface bus(es) 1410 to transmit communication signals such as address, data, or control signals between processor 1402 and other components in the system 1400. The interface bus 1410, in one embodiment, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor busses are not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI express), memory busses, or other types of interface busses. In one embodiment the processor(s) 1402 include an integrated memory controller 1416 and a platform controller hub 1430. The memory controller 1416 facilitates communication between a memory device and other components of the system 1400, while the platform controller hub (PCH) 1430 provides connections to I/O devices via a local I/O bus.

[0390] The memory device 1420 can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device 1420 can operate as system memory for the system 1400, to store data 1422 and instructions 1421 for use when the one or more processors 1402 executes an application or process. Memory controller 1416 also couples with an optional external graphics processor 1418, which may communicate with the one or more graphics processors 1408 in processors 1402 to perform graphics and media operations. In some embodiments, graphics, media, and/or compute operations may be assisted by an accelerator 1412 which is a coprocessor that can be configured to perform a specialized set of graphics, media, or compute operations. For example, in one embodiment the accelerator 1412 is a matrix multiplication accelerator used to optimize machine learning or compute operations. In one embodiment the accelerator 1412 is a ray-tracing accelerator that can be used to perform ray-tracing operations in concert with the graphics processor 1408. In one embodiment, an external accelerator 1419 may be used in place of or in concert with the accelerator 1412.

[0391] In some embodiments a display device 1411 can connect to the processor(s) 1402. The display device 1411 can be one or more of an internal display device, as in a mobile electronic device or a laptop device or an external display device attached via a display interface (e.g., DisplayPort, etc.). In one embodiment the display device 1411 can be a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.

[0392] In some embodiments the platform controller hub 1430 enables peripherals to connect to memory device 1420 and processor 1402 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 1446, a network controller 1434, a firmware interface 1428, a wireless transceiver 1426, touch sensors 1425, a data storage device 1424 (e.g., non-volatile memory, volatile memory, hard disk drive, flash memory, NAND, 3D NAND, 3D XPoint, etc.). The data storage device 1424 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI express). The touch sensors 1425 can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver 1426 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, 5G, or Long-Term Evolution (LTE) transceiver. The firmware interface 1428 enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). The network controller 1434 can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus 1410. The audio controller 1446, in one embodiment, is a multi-channel high definition audio controller. In one embodiment the system 1400 includes an optional legacy I/O controller 1440 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 1430 can also connect to one or more Universal Serial Bus (USB) controllers 1442 to connect input devices, such as keyboard and mouse 1443 combinations, a camera 1444, or other USB input devices.

[0393] It will be appreciated that the system 1400 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, an instance of the memory controller 1416 and platform controller hub 1430 may be integrated into a discrete external graphics processor, such as the external graphics processor 1418. In one embodiment the platform controller hub 1430 and/or memory controller 1416 may be external to the one or more processor(s) 1402. For example, the system 1400 can include an external memory controller 1416 and platform controller hub 1430, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with the processor(s) 1402.

[0394] For example, circuit boards ("sleds") on which components such as CPUs, memory, and other components are placed can be designed for increased thermal performance. In some examples, processing components such as the processors are located on a top side of a sled while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in a rack, thereby enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.

[0395] A data center can utilize a single network architecture ("fabric") that supports multiple other network architectures including Ethernet and Omni-Path. The sleds can be coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center may, in use, pool resources, such as memory, accelerators (e.g., GPUs, graphics accelerators, FPGAs, ASICs, neural network and/or artificial intelligence accelerators, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local.

[0396] A power supply or source can provide voltage and/or current to system 1400 or any component or system described herein. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy (e.g., solar power) power source. In one example, the power source includes a DC power source, such as an external AC to DC converter. In one example, the power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, the power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source.

[0397] FIGS. 11A-11D illustrate computing systems and graphics processors provided by embodiments described herein. The elements of FIGS. 11A-11D having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

[0398] FIG. 11A is a block diagram of an embodiment of a processor 1500 having one or more processor cores 1502A-1502N, an integrated memory controller 1514, and an integrated graphics processor 1508. Processor 1500 can include additional cores up to and including additional core 1502N represented by the dashed lined boxes. Each of processor cores 1502A-1502N includes one or more internal cache units 1504A-1504N. In some embodiments each processor core also has access to one or more shared cache units 1506. The internal cache units 1504A-1504N and shared cache units 1506 represent a cache memory hierarchy within the processor 1500. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 1506 and 1504A-1504N.

[0399] In some embodiments, processor 1500 may also include a set of one or more bus controller units 1516 and a system agent core 1510. The one or more bus controller units 1516 manage a set of peripheral buses, such as one or more PCI or PCI express busses. System agent core 1510 provides management functionality for the various processor components. In some embodiments, system agent core 1510 includes one or more integrated memory controllers 1514 to manage access to various external memory devices (not shown).

[0400] In some embodiments, one or more of the processor cores 1502A-1502N include support for simultaneous multi-threading. In such an embodiment, the system agent core 1510 includes components for coordinating and operating cores 1502A-1502N during multi-threaded processing. System agent core 1510 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 1502A-1502N and graphics processor 1508.

[0401] In some embodiments, processor 1500 additionally includes graphics processor 1508 to execute graphics processing operations. In some embodiments, the graphics processor 1508 couples with the set of shared cache units 1506, and the system agent core 1510, including the one or more integrated memory controllers 1514. In some embodiments, the system agent core 1510 also includes a display controller 1511 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 1511 may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 1508.

[0402] In some embodiments, a ring-based interconnect unit 1512 is used to couple the internal components of the processor 1500. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 1508 couples with the ring interconnect 1512 via an I/O link 1513.

[0403] The exemplary I/O link 1513 represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 1518, such as an eDRAM module. In some embodiments, each of the processor cores 1502A-1502N and graphics processor 1508 can use embedded memory modules 1518 as a shared Last Level Cache.

[0404] In some embodiments, processor cores 1502A-1502N are homogenous cores executing the same instruction set architecture. In another embodiment, processor cores 1502A-1502N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 1502A-1502N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, processor cores 1502A-1502N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. In one embodiment, processor cores 1502A-1502N are heterogeneous in terms of computational capability. Additionally, processor 1500 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

[0405] FIG. 11B is a block diagram of hardware logic of a graphics processor core 1519, according to some embodiments described herein. Elements of FIG. 11B having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The graphics processor core 1519, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core 1519 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core 1519 can include a fixed function block 1530 coupled with multiple sub-cores 1521A-1521F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic.

[0406] In some embodiments, the fixed function block 1530 includes a geometry/fixed function pipeline 1531 that can be shared by all sub-cores in the graphics processor core 1519, for example, in lower performance and/or lower power graphics processor implementations. In various embodiments, the geometry/fixed function pipeline 1531 includes a 3D fixed function pipeline (e.g., 3D pipeline 1612 as in FIG. 13, described below), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers (e.g., unified return buffer 1718 in FIG. 13, as described below).

[0407] In one embodiment the fixed function block 1530 also includes a graphics SoC interface 1532, a graphics microcontroller 1533, and a media pipeline 1534. The graphics SoC interface 1532 provides an interface between the graphics processor core 1519 and other processor cores within a system on a chip integrated circuit. The graphics microcontroller 1533 is a programmable sub-processor that is configurable to manage various functions of the graphics processor core 1519, including thread dispatch, scheduling, and pre-emption. The media pipeline 1534 (e.g., media pipeline 1616 of FIG. 12A) includes logic to facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. The media pipeline 1534 implements media operations via requests to compute or sampling logic within the sub-cores 1521A-1521F.

[0408] In one embodiment the SoC interface 1532 enables the graphics processor core 1519 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface 1532 can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics processor core 1519 and CPUs within the SoC. The SoC interface 1532 can also implement power management controls for the graphics processor core 1519 and enable an interface between a clock domain of the graphics processor core 1519 and other clock domains within the SoC. In one embodiment the SoC interface 1532 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline 1534, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 1531, geometry and fixed function pipeline 1537) when graphics processing operations are to be performed.

[0409] The graphics microcontroller 1533 can be configured to perform various scheduling and management tasks for the graphics processor core 1519. In one embodiment the graphics microcontroller 1533 can perform graphics and/or compute workload scheduling on the various graphics parallel engines within execution unit (EU) arrays 1522A-1522F, 1524A-1524F within the sub-cores 1521A-1521F. In this scheduling model, host software executing on a CPU core of an SoC including the graphics processor core 1519 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In one embodiment the graphics microcontroller 1533 can also facilitate low-power or idle states for the graphics processor core 1519, providing the graphics processor core 1519 with the ability to save and restore registers within the graphics processor core 1519 across low-power state transitions independently from the operating system and/or graphics driver software on the system.

[0410] The graphics processor core 1519 may have greater than or fewer than the illustrated sub-cores 1521A-1521F, up to N modular sub-cores. For each set of N sub-cores, the graphics processor core 1519 can also include shared function logic 1535, shared and/or cache memory 1536, a geometry/fixed function pipeline 1537, as well as additional fixed function logic (not shown) to accelerate various graphics and compute processing operations. The shared function logic 1535 can include logic units associated with the shared function logic 1720 of FIG. 13 (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each N sub-cores within the graphics processor core 1519. The shared and/or cache memory 1536 can be a last-level cache for the set of N sub-cores 1521A-1521F within the graphics processor core 1519, and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline 1537 can be included instead of the geometry/fixed function pipeline 1531 within the fixed function block 1530 and can include the same or similar logic units.

[0411] In one embodiment the graphics processor core 1519 includes additional fixed function logic that can include various fixed function acceleration logic for use by the graphics processor core 1519. In one embodiment the additional fixed function logic includes an additional geometry pipeline for use in position only shading. In position-only shading, two geometry pipelines exist, the full geometry pipeline within the geometry/fixed function pipeline 1531, and a cull pipeline, which is an additional geometry pipeline which may be included within the additional fixed function logic. In one embodiment the cull pipeline is a trimmed down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context. Position only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, in one embodiment the cull pipeline logic within the additional fixed function logic can execute position shaders in parallel with the main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades only the position attribute of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. The cull pipeline can use the generated critical results to compute visibility information for all the triangles without regard to whether those triangles are culled. The full pipeline (which in this instance may be referred to as a replay pipeline) can consume the visibility information to skip the culled triangles to shade only the visible triangles that are finally passed to the rasterization phase.

[0412] In one embodiment the additional fixed function logic can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing.

[0413] Each graphics sub-core 1521A-1521F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipeline, media pipeline, or shader programs. The graphics sub-cores 1521A-1521F include multiple EU arrays 1522A-1522F, 1524A-1524F, thread dispatch and inter-thread communication (TD/IC) logic 1523A-1523F, a 3D (e.g., texture) sampler 1525A-1525F, a media sampler 1507A-1507F, a shader processor 1527A-1527F, and shared local memory (SLM) 1528A-1528F. The EU arrays 1522A-1522F, 1524A-1524F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. The TD/IC logic 1523A-1523F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D sampler 1525A-1525F can read texture or other 3D graphics related data into memory. The 3D sampler can read texture data differently based on a configured sample state and the texture format associated with a given texture. The media sampler 1507A-1507F can perform similar read operations based on the type and format associated with media data. In one embodiment, each graphics sub-core 1521A-1521F can alternately include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores 1521A-1521F can make use of shared local memory 1528A-1528F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory.

[0414] FIG. 11C illustrates a graphics processing unit (GPU) 1539 that includes dedicated sets of graphics processing resources arranged into multi-core groups 1540A-1540N. While the details of only a single multi-core group 1540A are provided, it will be appreciated that the other multi-core groups 1540B-1540N may be equipped with the same or similar sets of graphics processing resources.

[0415] As illustrated, a multi-core group 1540A may include a set of graphics cores 1543, a set of tensor cores 1544, and a set of ray tracing cores 1545. A scheduler/dispatcher 1541 schedules and dispatches the graphics threads for execution on the various cores 1543, 1544, 1545. A set of register files 1542 store operand values used by the cores 1543, 1544, 1545 when executing the graphics threads. These may include, for example, integer registers for storing integer values, floating point registers for storing floating point values, vector registers for storing packed data elements (integer and/or floating point data elements) and tile registers for storing tensor/matrix values. In one embodiment, the tile registers are implemented as combined sets of vector registers.

[0416] One or more combined level 1 (L1) caches and shared memory units 1547 store graphics data such as texture data, vertex data, pixel data, ray data, bounding volume data, etc., locally within each multi-core group 1540A. One or more texture units 1547 can also be used to perform texturing operations, such as texture mapping and sampling. A Level 2 (L2) cache 1553 shared by all or a subset of the multi-core groups 1540A-1540N stores graphics data and/or instructions for multiple concurrent graphics threads. As illustrated, the L2 cache 1553 may be shared across a plurality of multi-core groups 1540A-1540N. One or more memory controllers 1548 couple the GPU 1539 to a memory 1549 which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory).

[0417] Input/output (I/O) circuitry 1550 couples the GPU 1539 to one or more I/O devices 1552 such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 1552 to the GPU 1539 and memory 1549. One or more I/O memory management units (IOMMUs) 1551 of the I/O circuitry 1550 couple the I/O devices 1552 directly to the system memory 1549. In one embodiment, the IOMMU 1551 manages multiple sets of page tables to map virtual addresses to physical addresses in system memory 1549. In this embodiment, the I/O devices 1552, CPU(s) 1546, and GPU(s) 1539 may share the same virtual address space.

[0418] In one implementation, the IOMMU 1551 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within system memory 1549). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). While not illustrated in FIG. 11C, each of the cores 1543, 1544, 1545 and/or multi-core groups 1540A-1540N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations.
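A toy model of the two-level translation described above (guest virtual to guest physical to host physical) is sketched below; the dictionaries stand in for the two sets of page tables, and the page size and mappings are arbitrary.

```python
PAGE = 4096

def translate(address, page_table):
    frame = page_table[address // PAGE]        # a missing entry models a page fault
    return frame * PAGE + address % PAGE

guest_page_table = {0: 7, 1: 3}                # guest virtual page -> guest physical frame
host_page_table = {7: 42, 3: 19}               # guest physical frame -> host physical frame

def iommu_translate(guest_virtual_address):
    guest_physical = translate(guest_virtual_address, guest_page_table)
    return translate(guest_physical, host_page_table)

assert iommu_translate(0x0010) == 42 * PAGE + 0x0010
```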

[0419] In one embodiment, the CPUs 1546, GPUs 1539, and I/O devices 1552 are integrated on a single semiconductor chip and/or chip package. The illustrated memory 1549 may be integrated on the same chip or may be coupled to the memory controllers 1548 via an off-chip interface. In one implementation, the memory 1549 comprises GDDR6 memory which shares the same virtual address space as other physical system-level memories, although the underlying principles of the embodiment are not limited to this specific implementation.

[0420] In one embodiment, the tensor cores 1544 include a plurality of execution units specifically designed to perform matrix operations, which are the fundamental compute operation used to perform deep learning operations. For example, simultaneous matrix multiplication operations may be used for neural network training and inferencing. The tensor cores 1544 may perform matrix processing using a variety of operand precisions including single precision floating-point (e.g., 32 bits), half-precision floating point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and half-bytes (4 bits). In one embodiment, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image.

[0421] In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores 1544. The training of neural networks, in particular, requires a significant number of matrix dot product operations. In order to process an inner-product formulation of an N.times.N.times.N matrix multiply, the tensor cores 1544 may include at least N dot-product processing elements. Before the matrix multiply begins, one entire matrix is loaded into tile registers and at least one column of a second matrix is loaded each cycle for N cycles. Each cycle, there are N dot products that are processed.
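The inner-product schedule described above can be modeled in a few lines; the sketch below is a functional simplification in which matrix A is held resident (as if in tile registers), one column of B is streamed per cycle, and N dot products are produced per cycle.

```python
def tensor_core_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for cycle in range(n):                              # one column of B loaded per cycle
        column = [B[k][cycle] for k in range(n)]
        for row in range(n):                            # N dot-product elements operate in parallel
            C[row][cycle] = sum(A[row][k] * column[k] for k in range(n))
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert tensor_core_matmul(A, B) == [[19, 22], [43, 50]]
```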

[0422] Matrix elements may be stored at different precisions depending on the particular implementation, including 16-bit words, 8-bit bytes (e.g., INT8) and 4-bit half-bytes (e.g., INT4). Different precision modes may be specified for the tensor cores 1544 to ensure that the most efficient precision is used for different workloads (e.g., such as inferencing workloads which can tolerate quantization to bytes and half-bytes).

[0423] In one embodiment, the ray tracing cores 1545 accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores 1545 include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores 1545 may also include circuitry for performing depth testing and culling (e.g., using a Z buffer or similar arrangement). In one implementation, the ray tracing cores 1545 perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores 1544. For example, in one embodiment, the tensor cores 1544 implement a deep learning neural network to perform denoising of frames generated by the ray tracing cores 1545. However, the CPU(s) 1546, graphics cores 1543, and/or ray tracing cores 1545 may also implement all or a portion of the denoising and/or deep learning algorithms.

[0424] In addition, as described above, a distributed approach to denoising may be employed in which the GPU 1539 is in a computing device coupled to other computing devices over a network or high speed interconnect. In this embodiment, the interconnected computing devices share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications.

[0425] In one embodiment, the ray tracing cores 1545 process all BVH traversal and ray-primitive intersections, saving the graphics cores 1543 from being overloaded with thousands of instructions per ray. In one embodiment, each ray tracing core 1545 includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and a second set of specialized circuitry for performing the ray-triangle intersection tests (e.g., intersecting rays which have been traversed). Thus, in one embodiment, the multi-core group 1540A can simply launch a ray probe, and the ray tracing cores 1545 independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. The other cores 1543, 1544 are freed to perform other graphics or compute work while the ray tracing cores 1545 perform the traversal and intersection operations.

[0426] In one embodiment, each ray tracing core 1545 includes a traversal unit to perform BVH testing operations and an intersection unit which performs ray-primitive intersection tests. The intersection unit generates a "hit", "no hit", or "multiple hit" response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., graphics cores 1543 and tensor cores 1544) are freed to perform other forms of graphics work.
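For illustration, the bounding-box test a traversal unit performs during BVH traversal can be expressed with the standard slab method; the sketch below is a generic textbook formulation, not the hardware circuit.

```python
def ray_aabb_hit(origin, direction, box_min, box_max):
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not lo <= o <= hi:
                return False                 # ray is parallel to the slab and outside it
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t0, t1))
        t_far = min(t_far, max(t0, t1))
        if t_near > t_far:
            return False                     # "no hit" response
    return True                              # "hit": descend into the child volume

assert ray_aabb_hit((0, 0, 0), (1, 1, 1), (1, 1, 1), (2, 2, 2))
assert not ray_aabb_hit((0, 0, 0), (1, 0, 0), (1, 1, 1), (2, 2, 2))
```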

[0427] In one particular embodiment described below, a hybrid rasterization/ray tracing approach is used in which work is distributed between the graphics cores 1543 and ray tracing cores 1545.

[0428] In one embodiment, the ray tracing cores 1545 (and/or other cores 1543, 1544) include hardware support for a ray tracing instruction set such as Microsoft's DirectX Ray Tracing (DXR) which includes a DispatchRays command, as well as ray-generation, closest-hit, any-hit, and miss shaders, which enable the assignment of unique sets of shaders and textures for each object. Another ray tracing platform which may be supported by the ray tracing cores 1545, graphics cores 1543 and tensor cores 1544 is Vulkan 1.1.85. Note, however, that the underlying principles of the embodiments are not limited to any particular ray tracing ISA.

[0429] In general, the various cores 1545, 1544, 1543 may support a ray tracing instruction set that includes instructions/functions for ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions. More specifically, one embodiment includes ray tracing instructions to perform the following functions:

[0430] Ray Generation--Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment.

[0431] Closest Hit--A closest hit instruction may be executed to locate the closest intersection point of a ray with primitives within a scene.

[0432] Any Hit--An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially to identify a new closest intersection point.

[0433] Intersection--An intersection instruction performs a ray-primitive intersection test and outputs a result.

[0434] Per-primitive Bounding box Construction--This instruction builds a bounding box around a given primitive or group of primitives (e.g., when building a new BVH or other acceleration data structure).

[0435] Miss--Indicates that a ray misses all geometry within a scene, or specified region of a scene.

[0436] Visit--Indicates the children volumes a ray will traverse.

[0437] Exceptions--Includes various types of exception handlers (e.g., invoked for various error conditions).

[0438] FIG. 11D is a block diagram of a general-purpose graphics processing unit (GPGPU) 1570 that can be configured as a graphics processor and/or compute accelerator, according to embodiments described herein. The GPGPU 1570 can interconnect with host processors (e.g., one or more CPU(s) 1546) and memory 1571, 1572 via one or more system and/or memory busses. In one embodiment the memory 1571 is system memory that may be shared with the one or more CPU(s) 1546, while memory 1572 is device memory that is dedicated to the GPGPU 1570. In one embodiment, components within the GPGPU 1570 and device memory 1572 may be mapped into memory addresses that are accessible to the one or more CPU(s) 1546. Access to memory 1571 and 1572 may be facilitated via a memory controller 1568. In one embodiment the memory controller 1568 includes an internal direct memory access (DMA) controller 1569 or can include logic to perform operations that would otherwise be performed by a DMA controller.

[0439] The GPGPU 1570 includes multiple cache memories, including an L2 cache 1553, L1 cache 1554, an instruction cache 1555, and shared memory 1556, at least a portion of which may also be partitioned as a cache memory. The GPGPU 1570 also includes multiple compute units 1560A-1560N. Each compute unit 1560A-1560N includes a set of vector registers 1561, scalar registers 1562, vector logic units 1563, and scalar logic units 1564. The compute units 1560A-1560N can also include local shared memory 1565 and a program counter 1566. The compute units 1560A-1560N can couple with a constant cache 1567, which can be used to store constant data, which is data that will not change during the run of a kernel or shader program that executes on the GPGPU 1570. In one embodiment the constant cache 1567 is a scalar data cache and cached data can be fetched directly into the scalar registers 1562.

[0440] During operation, the one or more CPU(s) 1546 can write commands into registers or memory in the GPGPU 1570 that has been mapped into an accessible address space. The command processors 1557 can read the commands from registers or memory and determine how those commands will be processed within the GPGPU 1570. A thread dispatcher 1558 can then be used to dispatch threads to the compute units 1560A-1560N to perform those commands. Each compute unit 1560A-1560N can execute threads independently of the other compute units. Additionally each compute unit 1560A-1560N can be independently configured for conditional computation and can conditionally output the results of computation to memory. The command processors 1557 can interrupt the one or more CPU(s) 1546 when the submitted commands are complete.

[0441] FIGS. 12A-12B illustrate block diagrams of additional graphics processor and compute accelerator architectures provided by embodiments described herein. The elements of FIGS. 12A-12B having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

[0442] FIG. 12A is a block diagram of a graphics processor 1600, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores, or other semiconductor devices such as, but not limited to, memory devices or network interfaces. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor 1600 includes a memory interface 1614 to access memory. Memory interface 1614 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

[0443] In some embodiments, graphics processor 1600 also includes a display controller 1602 to drive display output data to a display device 1618. Display controller 1602 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device 1618 can be an internal or external display device. In one embodiment the display device 1618 is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In some embodiments, graphics processor 1600 includes a video codec engine 1606 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, H.265/HEVC, Alliance for Open Media (AOMedia) VP8, VP9, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats.

[0444] In some embodiments, graphics processor 1600 includes a block image transfer (BLIT) engine 1604 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 1610. In some embodiments, GPE 1610 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

[0445] In some embodiments, GPE 1610 includes a 3D pipeline 1612 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 1612 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system 1615. While 3D pipeline 1612 can be used to perform media operations, an embodiment of GPE 1610 also includes a media pipeline 1616 that is specifically used to perform media operations, such as video post-processing and image enhancement.

[0446] In some embodiments, media pipeline 1616 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of video codec engine 1606. In some embodiments, media pipeline 1616 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 1615. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system 1615.

[0447] In some embodiments, 3D/Media subsystem 1615 includes logic for executing threads spawned by 3D pipeline 1612 and media pipeline 1616. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem 1615, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/Media subsystem 1615 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

[0448] FIG. 12B illustrates a graphics processor 1620 having a tiled architecture, according to embodiments described herein. In one embodiment the graphics processor 1620 includes a graphics processing engine cluster 1622 having multiple instances of the graphics processing engine 1610 of FIG. 12A within a graphics engine tile 1610A-1610D. Each graphics engine tile 1610A-1610D can be interconnected via a set of tile interconnects 1623A-1623F. Each graphics engine tile 1610A-1610D can also be connected to a memory module or memory device 1626A-1626D via memory interconnects 1625A-1625D. The memory devices 1626A-1626D can use any graphics memory technology. For example, the memory devices 1626A-1626D may be graphics double data rate (GDDR) memory. The memory devices 1626A-1626D, in one embodiment, are high-bandwidth memory (HBM) modules that can be on-die with their respective graphics engine tile 1610A-1610D. In one embodiment the memory devices 1626A-1626D are stacked memory devices that can be stacked on top of their respective graphics engine tile 1610A-1610D. In one embodiment, each graphics engine tile 1610A-1610D and associated memory 1626A-1626D reside on separate chiplets, which are bonded to a base die or base substrate, as described in further detail in FIGS. 20B-20D.

[0449] The graphics processing engine cluster 1622 can connect with an on-chip or on-package fabric interconnect 1624. The fabric interconnect 1624 can enable communication between graphics engine tiles 1610A-1610D and components such as the video codec 1606 and one or more copy engines 1604. The copy engines 1604 can be used to move data out of, into, and between the memory devices 1626A-1626D and memory that is external to the graphics processor 1620 (e.g., system memory). The fabric interconnect 1624 can also be used to interconnect the graphics engine tiles 1610A-1610D. The graphics processor 1620 may optionally include a display controller 1602 to enable a connection with an external display device 1618. The graphics processor may also be configured as a graphics or compute accelerator. In the accelerator configuration, the display controller 1602 and display device 1618 may be omitted.

[0450] The graphics processor 1620 can connect to a host system via a host interface 1628. The host interface 1628 can enable communication between the graphics processor 1620, system memory, and/or other system components. The host interface 1628 can be, for example, a PCI Express bus or another type of host system interface.

[0451] FIG. 12C illustrates a compute accelerator 1630, according to embodiments described herein. The compute accelerator 1630 can include architectural similarities with the graphics processor 1620 of FIG. 12B and is optimized for compute acceleration. A compute engine cluster 1632 can include a set of compute engine tiles 1640A-1640D that include execution logic that is optimized for parallel or vector-based general-purpose compute operations. In some embodiments, the compute engine tiles 1640A-1640D do not include fixed function graphics processing logic, although in one embodiment one or more of the compute engine tiles 1640A-1640D can include logic to perform media acceleration. The compute engine tiles 1640A-1640D can connect to memory 1626A-1626D via memory interconnects 1625A-1625D. The memory 1626A-1626D and memory interconnects 1625A-1625D may use technology similar to that of the graphics processor 1620, or can be different. The graphics compute engine tiles 1640A-1640D can also be interconnected via a set of tile interconnects 1623A-1623F and may be connected with and/or interconnected by a fabric interconnect 1624. In one embodiment the compute accelerator 1630 includes a large L3 cache 1636 that can be configured as a device-wide cache. In some embodiments, the compute accelerator 1630 may encrypt data in a format that permits paging prior to storing data outside the compute engine cluster 1632, as described with respect to the encryption conversion scheme with a paging process 1050 (FIG. 9E), the method 1090 (FIG. 9F), and the method 1120 (FIG. 9G). The compute accelerator 1630 can also connect to a host processor and memory via a host interface 1628 in a similar manner as the graphics processor 1620 of FIG. 12B.
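
To picture the paging-friendly encryption step, the following is a minimal sketch, assuming a toy XOR keystream as a stand-in for a real encryption engine: data held in a form tied to its physical address is re-encrypted page by page under a tweak that travels with the page, so the page can be relocated or swapped without breaking decryption. The key, page size, and keystream construction are illustrative assumptions, not details drawn from the described encryption conversion scheme.

    // Illustrative only: a toy re-encryption pass that converts data from an
    // address-tweaked ("non-pageable") form to a per-page counter form
    // ("pageable"), so the ciphertext no longer depends on where the page sits.
    // The XOR "cipher" is a stand-in for a real hardware engine.
    #include <cstdint>
    #include <cstddef>
    #include <vector>

    constexpr std::size_t kPageBytes = 4096;  // assumed page size

    uint8_t keystream_byte(uint64_t key, uint64_t tweak, std::size_t i) {
        // Toy keystream; a real implementation would use the hardware crypto engine.
        return static_cast<uint8_t>((key ^ tweak ^ (i * 0x9E3779B97F4A7C15ull)) & 0xFF);
    }

    // Re-encrypt one page: strip the physical-address tweak, then apply a tweak
    // derived from a stable per-page counter that travels with the page when swapped.
    void make_page_pageable(std::vector<uint8_t>& page, uint64_t key,
                            uint64_t physical_addr, uint64_t page_counter) {
        for (std::size_t i = 0; i < page.size() && i < kPageBytes; ++i) {
            uint8_t plain = page[i] ^ keystream_byte(key, physical_addr, i);  // undo non-pageable form
            page[i] = plain ^ keystream_byte(key, page_counter, i);           // apply pageable form
        }
    }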

[0452] Graphics Processing Engine

[0453] FIG. 13 is a block diagram of a graphics processing engine 1710 of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE) 1710 is a version of the GPE 310 shown in FIG. 12A, and may also represent a graphics engine tile 310A-310D of FIG. 12B. Elements of FIG. 13 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline 312 and media pipeline 316 of FIG. 12A are illustrated. The media pipeline 316 is optional in some embodiments of the GPE 1710 and may not be explicitly included within the GPE 1710. For example and in at least one embodiment, a separate media and/or image processor is coupled to the GPE 1710.

[0454] In some embodiments, GPE 1710 couples with or includes a command streamer 1703, which provides a command stream to the 3D pipeline 312 and/or media pipelines 316. In some embodiments, command streamer 1703 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 1703 receives commands from the memory and sends the commands to 3D pipeline 312 and/or media pipeline 316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 312 and media pipeline 316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 312 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 312 and/or image data and memory objects for the media pipeline 316. The 3D pipeline 312 and media pipeline 316 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 1714. In one embodiment the graphics core array 1714 includes one or more blocks of graphics cores (e.g., graphics core(s) 1715A, graphics core(s) 1715B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic.
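
As a rough illustration of the fetch pattern described above, the sketch below walks a ring buffer of command words and, on encountering a hypothetical batch-buffer-start marker, follows an index to a secondary buffer of commands before continuing; the marker value and word-oriented encoding are invented for the example.

    // Sketch of a command streamer walking a ring buffer with batch indirection.
    #include <cstdint>
    #include <vector>

    constexpr uint32_t kBatchBufferStart = 0xFFFF0000u;  // assumed marker value

    void stream_commands(const std::vector<uint32_t>& ring,
                         const std::vector<std::vector<uint32_t>>& batch_buffers,
                         void (*send_to_pipeline)(uint32_t)) {
        for (std::size_t i = 0; i < ring.size(); ++i) {
            if (ring[i] == kBatchBufferStart && i + 1 < ring.size()) {
                // Next word indexes a batch of commands stored elsewhere in memory
                // (the index is assumed to be valid for this sketch).
                for (uint32_t cmd : batch_buffers[ring[++i]]) {
                    send_to_pipeline(cmd);  // 3D or media pipeline, as appropriate
                }
            } else {
                send_to_pipeline(ring[i]);
            }
        }
    }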

[0455] In various embodiments the 3D pipeline 312 can include fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array 1714. The graphics core array 1714 provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic (e.g., execution units) within the graphics core(s) 1715A-1715B of the graphics core array 1714 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.

[0456] In some embodiments, the graphics core array 1714 includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within the processor core(s) 1407 of FIG. 10 or core 1502A-1502N as in FIG. 11A.

[0457] Threads executing on the graphics core array 1714 can output data to memory in a unified return buffer (URB) 1718. The URB 1718 can store data for multiple threads. In some embodiments the URB 1718 may be used to send data between different threads executing on the graphics core array 1714. In some embodiments the URB 1718 may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic 1720.

[0458] In some embodiments, graphics core array 1714 is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE 1710. In one embodiment the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.

[0459] The graphics core array 1714 couples with shared function logic 1720 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 1720 are hardware logic units that provide specialized supplemental functionality to the graphics core array 1714. In various embodiments, shared function logic 1720 includes but is not limited to sampler 1721, math 1722, and inter-thread communication (ITC) 1723 logic. Additionally, some embodiments implement one or more cache(s) 1725 within the shared function logic 1720.

[0460] A shared function is implemented at least in a case where the demand for a given specialized function is insufficient for inclusion within the graphics core array 1714. Instead a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 1720 and shared among the execution resources within the graphics core array 1714. The precise set of functions that are shared between the graphics core array 1714 and included within the graphics core array 1714 varies across embodiments. In some embodiments, specific shared functions within the shared function logic 1720 that are used extensively by the graphics core array 1714 may be included within shared function logic 1716 within the graphics core array 1714. In various embodiments, the shared function logic 1716 within the graphics core array 1714 can include some or all logic within the shared function logic 1720. In one embodiment, all logic elements within the shared function logic 1720 may be duplicated within the shared function logic 1716 of the graphics core array 1714. In one embodiment the shared function logic 1720 is excluded in favor of the shared function logic 1716 within the graphics core array 1714.

[0461] Execution Units

[0462] FIGS. 14A-14B illustrate thread execution logic 1800 including an array of processing elements employed in a graphics processor core according to embodiments described herein. Elements of FIGS. 14A-14B having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. FIGS. 14A-14B illustrate an overview of thread execution logic 1800, which may be representative of hardware logic illustrated with each sub-core 221A-221F of FIG. 11B. FIG. 14A is representative of an execution unit within a general-purpose graphics processor, while FIG. 14B is representative of an execution unit that may be used within a compute accelerator.

[0463] As illustrated in FIG. 14A, in some embodiments thread execution logic 1800 includes a shader processor 1802, a thread dispatcher 1804, instruction cache 1806, a scalable execution unit array including a plurality of execution units 1808A-1808N, a sampler 1810, shared local memory 1811, a data cache 1812, and a data port 1814. In one embodiment the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution units 1808A, 1808B, 1808C, 1808D, through 1808N-1 and 1808N) based on the computational requirements of a workload. In one embodiment the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic 1800 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 1806, data port 1814, sampler 1810, and execution units 1808A-1808N. In some embodiments, each execution unit (e.g., 1808A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units 1808A-1808N is scalable to include any number of individual execution units.

[0464] In some embodiments, the execution units 1808A-1808N are primarily used to execute shader programs. A shader processor 1802 can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher 1804. In one embodiment the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more of the execution units 1808A-1808N. For example, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing. In some embodiments, thread dispatcher 1804 can also process runtime thread spawning requests from the executing shader programs.

[0465] In some embodiments, the execution units 1808A-1808N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders). Each of the execution units 1808A-1808N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher-latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread-state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units 1808A-1808N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader. Various embodiments can use execution based on Single Instruction Multiple Thread (SIMT) as an alternative to the use of SIMD, or in addition to the use of SIMD. Reference to a SIMD core or operation can also apply to SIMT, or apply to SIMD in combination with SIMT.

[0466] Each execution unit in execution units 1808A-1808N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units 1808A-1808N support integer and floating-point data types.

[0467] The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
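
The packing described above can be pictured with a small host-side helper that reinterprets the same 256 bits at each of the listed element widths; this is an illustration of the data layout only, not hardware code.

    // A 256-bit register viewed as 4 x 64-bit quad-words, 8 x 32-bit double-words,
    // 16 x 16-bit words, or 32 bytes.
    #include <array>
    #include <cstdint>
    #include <cstring>

    struct Vec256 {
        std::array<uint8_t, 32> bytes{};  // 256 bits of packed data

        template <typename T, std::size_t N>
        std::array<T, N> as() const {
            static_assert(sizeof(T) * N == 32, "must cover exactly 256 bits");
            std::array<T, N> out{};
            std::memcpy(out.data(), bytes.data(), 32);
            return out;
        }
    };

    // Usage: the same register holds 4 QW, 8 DW, 16 W, or 32 B elements.
    //   Vec256 v{};
    //   auto qw = v.as<uint64_t, 4>();
    //   auto dw = v.as<uint32_t, 8>();
    //   auto w  = v.as<uint16_t, 16>();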

[0468] In one embodiment one or more execution units can be combined into a fused execution unit 1809A-1809N having thread control logic (1807A-1807N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments. Additionally, various SIMD widths can be performed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit 1809A-1809N includes at least two execution units. For example, fused execution unit 1809A includes a first EU 1808A, second EU 1808B, and thread control logic 1807A that is common to the first EU 1808A and the second EU 1808B. The thread control logic 1807A controls threads executed on the fused graphics execution unit 1809A, allowing each EU within the fused execution units 1809A-1809N to execute using a common instruction pointer register.

[0469] One or more internal instruction caches (e.g., 1806) are included in the thread execution logic 1800 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 1812) are included to cache thread data during thread execution. Threads executing on the execution logic 1800 can also store explicitly managed data in the shared local memory 1811. In some embodiments, a sampler 1810 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler 1810 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.

[0470] During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 1800 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 1802 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel processor logic within the shader processor 1802 then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 1802 dispatches threads to an execution unit (e.g., 1808A) via thread dispatcher 1804. In some embodiments, shader processor 1802 uses texture sampling logic in the sampler 1810 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.

[0471] In some embodiments, the data port 1814 provides a memory access mechanism for the thread execution logic 1800 to output processed data to memory for further processing on a graphics processor output pipeline. In some embodiments, the data port 1814 includes or couples to one or more cache memories (e.g., data cache 1812) to cache data for memory access via the data port.

[0472] In one embodiment, the execution logic 1800 can also include a ray tracer 1805 that can provide ray tracing acceleration functionality. The ray tracer 1805 can support a ray tracing instruction set that includes instructions/functions for ray generation. The ray tracing instruction set can be similar to or different from the ray-tracing instruction set supported by the ray tracing cores 245 in FIG. 11C.

[0473] FIG. 14B illustrates exemplary internal details of an execution unit 1808, according to embodiments. A graphics execution unit 1808 can include an instruction fetch unit 1837, a general register file array (GRF) 1824, an architectural register file array (ARF) 1826, a thread arbiter 1822, a send unit 1830, a branch unit 1832, a set of SIMD floating point units (FPUs) 1834, and in one embodiment a set of dedicated integer SIMD ALUs 1835. The GRF 1824 and ARF 1826 include the set of general register files and architecture register files associated with each simultaneous hardware thread that may be active in the graphics execution unit 1808. In one embodiment, per thread architectural state is maintained in the ARF 1826, while data used during thread execution is stored in the GRF 1824. The execution state of each thread, including the instruction pointers for each thread, can be held in thread-specific registers in the ARF 1826.

[0474] In one embodiment the graphics execution unit 1808 has an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). The architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across logic used to execute multiple simultaneous threads. The number of logical threads that may be executed by the graphics execution unit 1808 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread.

[0475] In one embodiment, the graphics execution unit 1808 can co-issue multiple instructions, which may each be different instructions. The thread arbiter 1822 of the graphics execution unit 1808 can dispatch the instructions to one of the send unit 1830, branch unit 1832, or SIMD FPU(s) 1834 for execution. Each execution thread can access 128 general-purpose registers within the GRF 1824, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In one embodiment, each execution unit thread has access to 4 Kbytes within the GRF 1824, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In one embodiment the graphics execution unit 1808 is partitioned into seven hardware threads that can independently perform computational operations, although the number of threads per execution unit can also vary according to embodiments. For example, in one embodiment up to 16 hardware threads are supported. In an embodiment in which seven threads may access 4 Kbytes, the GRF 1824 can store a total of 28 Kbytes. Where 16 threads may access 4 Kbytes, the GRF 1824 can store a total of 64 Kbytes. Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.
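
The register-file sizing above follows directly from the figures given: 128 registers of 32 bytes each is 4 Kbytes per thread, hence 28 Kbytes for seven threads and 64 Kbytes for sixteen. A short check, for illustration only:

    // Quick check of the register-file arithmetic described above.
    #include <cstdio>

    int main() {
        const int registers_per_thread = 128;
        const int bytes_per_register   = 32;   // one SIMD-8 vector of 32-bit elements
        const int per_thread_kb = registers_per_thread * bytes_per_register / 1024;  // 4
        std::printf("per thread: %d KB, 7 threads: %d KB, 16 threads: %d KB\n",
                    per_thread_kb, 7 * per_thread_kb, 16 * per_thread_kb);
        return 0;
    }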

[0476] In one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions that are executed by the message passing send unit 1830. In one embodiment, branch instructions are dispatched to a dedicated branch unit 1832 to facilitate SIMD divergence and eventual convergence.

[0477] In one embodiment the graphics execution unit 1808 includes one or more SIMD floating point units (FPU(s)) 1834 to perform floating-point operations. In one embodiment, the FPU(s) 1834 also support integer computation. In one embodiment the FPU(s) 1834 can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. In one embodiment, at least one of the FPU(s) provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. In some embodiments, a set of 8-bit integer SIMD ALUs 1835 are also present, and may be specifically optimized to perform operations associated with machine learning computations.

[0478] In one embodiment, arrays of multiple instances of the graphics execution unit 1808 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects can choose the exact number of execution units per sub-core grouping. In one embodiment the execution unit 1808 can execute instructions across a plurality of execution channels. In a further embodiment, each thread executed on the graphics execution unit 1808 is executed on a different channel.

[0479] FIG. 15 illustrates an additional execution unit 1900, according to an embodiment. The execution unit 1900 may be a compute-optimized execution unit for use in, for example, a compute engine tile 340A-340D as in FIG. 12C, but is not limited as such. Variants of the execution unit 1900 may also be used in a graphics engine tile 310A-310D as in FIG. 12B. In one embodiment, the execution unit 1900 includes a thread control unit 1901, a thread state unit 1902, an instruction fetch/prefetch unit 1903, and an instruction decode unit 1904. The execution unit 1900 additionally includes a register file 1906 that stores registers that can be assigned to hardware threads within the execution unit. The execution unit 1900 additionally includes a send unit 1907 and a branch unit 1908. In one embodiment, the send unit 1907 and branch unit 1908 can operate similarly as the send unit 1830 and a branch unit 1832 of the graphics execution unit 1808 of FIG. 14B.

[0480] The execution unit 1900 also includes a compute unit 1910 that includes multiple different types of functional units. In one embodiment the compute unit 1910 includes an ALU unit 1911 that includes an array of arithmetic logic units. The ALU unit 1911 can be configured to perform 64-bit, 32-bit, and 16-bit integer and floating point operations. Integer and floating point operations may be performed simultaneously. The compute unit 1910 can also include a systolic array 1912, and a math unit 1913. The systolic array 1912 includes a W wide and D deep network of data processing units that can be used to perform vector or other data-parallel operations in a systolic manner. In one embodiment the systolic array 1912 can be configured to perform matrix operations, such as matrix dot product operations. In one embodiment the systolic array 1912 supports 16-bit floating point operations, as well as 8-bit and 4-bit integer operations. In one embodiment the systolic array 1912 can be configured to accelerate machine learning operations. In such embodiments, the systolic array 1912 can be configured with support for the bfloat16 floating point format. In one embodiment, a math unit 1913 can be included to perform a specific subset of mathematical operations in a more efficient and lower-power manner than the ALU unit 1911. The math unit 1913 can include a variant of math logic that may be found in shared function logic of a graphics processing engine provided by other embodiments (e.g., math logic 422 of the shared function logic 420 of FIG. 13). In one embodiment the math unit 1913 can be configured to perform 32-bit and 64-bit floating point operations.
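
A functional model of the W-wide, D-deep dot-product behavior attributed to the systolic array 1912 might look like the sketch below; the staging into D steps mirrors the pipeline depth, but this is a software illustration under assumed dimensions, not a hardware description.

    // Functional model of a W-wide, D-deep dot-product pass.
    #include <array>
    #include <cstddef>

    template <std::size_t W, std::size_t D>
    float systolic_dot(const std::array<std::array<float, W>, D>& a,
                       const std::array<std::array<float, W>, D>& b) {
        float acc = 0.0f;
        for (std::size_t step = 0; step < D; ++step) {      // one wave per pipeline stage
            for (std::size_t lane = 0; lane < W; ++lane) {  // W multiply-accumulates in parallel
                acc += a[step][lane] * b[step][lane];
            }
        }
        return acc;
    }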

[0481] The thread control unit 1901 includes logic to control the execution of threads within the execution unit. The thread control unit 1901 can include thread arbitration logic to start, stop, and preempt execution of threads within the execution unit 1900. The thread state unit 1902 can be used to store thread state for threads assigned to execute on the execution unit 1900. Storing the thread state within the execution unit 1900 enables the rapid pre-emption of threads when those threads become blocked or idle. The instruction fetch/prefetch unit 1903 can fetch instructions from an instruction cache of higher level execution logic (e.g., instruction cache 1806 as in FIG. 14A). The instruction fetch/prefetch unit 1903 can also issue prefetch requests for instructions to be loaded into the instruction cache based on an analysis of currently executing threads. The instruction decode unit 1904 can be used to decode instructions to be executed by the compute units. In one embodiment, the instruction decode unit 1904 can be used as a secondary decoder to decode complex instructions into constituent micro-operations.

[0482] The execution unit 1900 additionally includes a register file 1906 that can be used by hardware threads executing on the execution unit 1900. Registers in the register file 1906 can be divided across the logic used to execute multiple simultaneous threads within the compute unit 1910 of the execution unit 1900. The number of logical threads that may be executed by the graphics execution unit 1900 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread. The size of the register file 1906 can vary across embodiments based on the number of supported hardware threads. In one embodiment, register renaming may be used to dynamically allocate registers to hardware threads.

[0483] FIG. 16 is a block diagram illustrating graphics processor instruction formats 2000 according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments, the instruction formats 2000 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.

[0484] In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format 2010. A 64-bit compacted instruction format 2030 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 2010 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 2030. The native instructions available in the 64-bit format 2030 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 2013. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 2010. Other sizes and formats of instruction can be used.
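
One way to picture the decompaction step is sketched below: index values carried in the 64-bit form select rows of compaction tables, which are stitched back into a 128-bit native instruction. The field positions, index widths, and table contents are invented for the example.

    // Sketch of rebuilding a native 128-bit instruction from a 64-bit compact form
    // using hypothetical index fields and compaction tables.
    #include <array>
    #include <cstdint>

    struct Native128 { uint64_t lo = 0, hi = 0; };

    Native128 decompact(uint64_t compact,
                        const std::array<uint64_t, 32>& control_table,
                        const std::array<uint64_t, 32>& operand_table) {
        uint32_t ctrl_index = (compact >> 8)  & 0x1F;  // hypothetical index fields
        uint32_t src_index  = (compact >> 13) & 0x1F;
        Native128 n;
        n.lo = (compact & 0xFF)                        // opcode carried directly
             | (control_table[ctrl_index] << 8);       // expanded control bits
        n.hi = operand_table[src_index];               // expanded operand description
        return n;
    }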

[0485] For each format, instruction opcode 2012 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field 2014 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 2010 an exec-size field 2016 limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field 2016 is not available for use in the 64-bit compact instruction format 2030.

[0486] Some execution unit instructions have up to three operands including two source operands, src0 2020, src1 2022, and one destination 2018. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 2024), where the instruction opcode 2012 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.

[0487] In some embodiments, the 128-bit instruction format 2010 includes an access/address mode field 2026 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.

[0488] In some embodiments, the 128-bit instruction format 2010 includes an access/address mode field 2026, which specifies an address mode and/or an access mode for the instruction. In one embodiment the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.

[0489] In one embodiment, the address mode portion of the access/address mode field 2026 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.
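
A minimal sketch of the two addressing modes, assuming hypothetical field positions: in direct mode the register number is read straight from the instruction bits, while in indirect mode it is computed from an address register value plus an address immediate.

    // Direct versus indirect operand addressing, with invented field layouts.
    #include <array>
    #include <cstdint>

    uint32_t operand_register(uint32_t encoded_operand, bool indirect,
                              const std::array<uint32_t, 16>& address_regs) {
        if (!indirect) {
            return encoded_operand & 0x7F;                  // direct: register number in the bits
        }
        uint32_t addr_reg  = (encoded_operand >> 7) & 0xF;  // which address register
        uint32_t immediate =  encoded_operand & 0x7F;       // address immediate field
        return address_regs[addr_reg] + immediate;          // computed register address
    }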

[0490] In some embodiments instructions are grouped based on opcode 2012 bit-fields to simplify Opcode decode 2040. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 2042 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group 2042 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 2044 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 2046 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 2048 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 2048 performs the arithmetic operations in parallel across data channels. The vector math group 2050 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. The illustrated opcode decode 2040, in one embodiment, can be used to determine which portion of an execution unit will be used to execute a decoded instruction. For example, some instructions may be designated as systolic instructions that will be performed by a systolic array. Other instructions, such as ray-tracing instructions (not shown) can be routed to a ray-tracing core or ray-tracing logic within a slice or partition of execution logic.
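
The grouping can be expressed as a simple decode over the high-order opcode bits, matching the bit patterns listed above; the enumeration names are illustrative.

    // Classify an 8-bit opcode into the groups listed above by its upper bits.
    #include <cstdint>

    enum class OpcodeGroup { Move, Logic, FlowControl, Misc, ParallelMath, VectorMath, Unknown };

    OpcodeGroup classify(uint8_t opcode) {
        switch (opcode >> 4) {                           // upper opcode bits select the group
            case 0x0: return OpcodeGroup::Move;          // 0000xxxxb, e.g. mov
            case 0x1: return OpcodeGroup::Logic;         // 0001xxxxb, e.g. cmp
            case 0x2: return OpcodeGroup::FlowControl;   // 0010xxxxb, e.g. jmp (0x20)
            case 0x3: return OpcodeGroup::Misc;          // 0011xxxxb, e.g. wait, send (0x30)
            case 0x4: return OpcodeGroup::ParallelMath;  // 0100xxxxb, e.g. add, mul (0x40)
            case 0x5: return OpcodeGroup::VectorMath;    // 0101xxxxb, e.g. dp4 (0x50)
            default:  return OpcodeGroup::Unknown;
        }
    }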

[0491] Graphics Pipeline

[0492] FIG. 17 is a block diagram of another embodiment of a graphics processor 2100. Elements of FIG. 17 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

[0493] In some embodiments, graphics processor 2100 includes a geometry pipeline 2120, a media pipeline 2130, a display engine 2140, thread execution logic 2150, and a render output pipeline 2170. In some embodiments, graphics processor 2100 is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 2100 via a ring interconnect 2102. In some embodiments, ring interconnect 2102 couples graphics processor 2100 to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect 2102 are interpreted by a command streamer 2103, which supplies instructions to individual components of the geometry pipeline 2120 or the media pipeline 2130. In some embodiments, the graphics processor 2100 may implement aspects of the environment 700 (FIG. 8A), architecture 759 (FIG. 8B), process 760 (FIG. 8C), and method 780 (FIG. 8D).

[0494] In some embodiments, command streamer 2103 directs the operation of a vertex fetcher 2105 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 2103. In some embodiments, vertex fetcher 2105 provides vertex data to a vertex shader 2107, which performs coordinate space transformation and lighting operations on each vertex. In some embodiments, vertex fetcher 2105 and vertex shader 2107 execute vertex-processing instructions by dispatching execution threads to execution units 2152A-2152B via a thread dispatcher 2131.
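
As an illustration of the per-vertex work a vertex shader such as 2107 performs, the following plain C++ functions apply a coordinate-space transform and a simple Lambertian lighting term; in the architecture described, this work would instead be dispatched as threads to execution units 2152A-2152B.

    // Per-vertex transform and a simple diffuse lighting term, host-side illustration only.
    #include <algorithm>
    #include <array>

    using Vec4 = std::array<float, 4>;
    using Mat4 = std::array<Vec4, 4>;   // row-major model-view-projection matrix

    Vec4 transform(const Mat4& mvp, const Vec4& position) {
        Vec4 out{};
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                out[r] += mvp[r][c] * position[c];
        return out;
    }

    float lambert(const std::array<float, 3>& normal, const std::array<float, 3>& light_dir) {
        float dot = 0.0f;
        for (int i = 0; i < 3; ++i) dot += normal[i] * light_dir[i];
        return std::max(dot, 0.0f);     // clamp back-facing contributions to zero
    }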

[0495] In some embodiments, execution units 2152A-2152B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units 2152A-2152B have an attached L1 cache 2151 that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.

[0496] In some embodiments, geometry pipeline 2120 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 2111 configures the tessellation operations. A programmable domain shader 2117 provides back-end evaluation of tessellation output. A tessellator 2113 operates at the direction of hull shader 2111 and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to geometry pipeline 2120. In some embodiments, if tessellation is not used, tessellation components (e.g., hull shader 2111, tessellator 2113, and domain shader 2117) can be bypassed.

[0497] In some embodiments, complete geometric objects can be processed by a geometry shader 2119 via one or more threads dispatched to execution units 2152A-2152B, or can proceed directly to the clipper 2129. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If the tessellation is disabled the geometry shader 2119 receives input from the vertex shader 2107. In some embodiments, geometry shader 2119 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.

[0498] Before rasterization, a clipper 2129 processes vertex data. The clipper 2129 may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 2173 in the render output pipeline 2170 dispatches pixel shaders to convert the geometric objects into per pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 2150. In some embodiments, an application can bypass the rasterizer and depth test component 2173 and access un-rasterized vertex data via a stream out unit 2123.

[0499] The graphics processor 2100 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units 2152A-2152B and associated logic units (e.g., L1 cache 2151, sampler 2154, texture cache 2158, etc.) interconnect via a data port 2156 to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler 2154, caches 2151, 2158 and execution units 2152A-2152B each have separate memory access paths. In one embodiment the texture cache 2158 can also be configured as a sampler cache.

[0500] In some embodiments, render output pipeline 2170 contains a rasterizer and depth test component 2173 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache 2178 and depth cache 2179 are also available in some embodiments. A pixel operations component 2177 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g., bit block image transfers with blending) are performed by the 2D engine 2141, or substituted at display time by the display controller 2143 using overlay display planes. In some embodiments, a shared L3 cache 2175 is available to all graphics components, allowing the sharing of data without the use of main system memory.

[0501] In some embodiments, graphics processor media pipeline 2130 includes a media engine 2137 and a video front-end 2134. In some embodiments, video front-end 2134 receives pipeline commands from the command streamer 2103. In some embodiments, media pipeline 2130 includes a separate command streamer. In some embodiments, video front-end 2134 processes media commands before sending the command to the media engine 2137. In some embodiments, media engine 2137 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 2150 via thread dispatcher 2131.

[0502] In some embodiments, graphics processor 2100 includes a display engine 2140. In some embodiments, display engine 2140 is external to processor 2100 and couples with the graphics processor via the ring interconnect 2102, or some other interconnect bus or fabric. In some embodiments, display engine 2140 includes a 2D engine 2141 and a display controller 2143. In some embodiments, display engine 2140 contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 2143 couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector.

[0503] In some embodiments, the geometry pipeline 2120 and media pipeline 2130 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.

[0504] Graphics Pipeline Programming

[0505] FIG. 18A is a block diagram illustrating a graphics processor command format 2200 according to some embodiments. FIG. 18B is a block diagram illustrating a graphics processor command sequence 2210 according to an embodiment. The solid lined boxes in FIG. 18A illustrate the components that are generally included in a graphics command while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format 2200 of FIG. 18A includes data fields to identify a client 2202, a command operation code (opcode) 2204, and data 2206 for the command. A sub-opcode 2205 and a command size 2208 are also included in some commands.

[0506] In some embodiments, client 2202 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 2204 and, if present, sub-opcode 2205 to determine the operation to perform. The client unit performs the command using information in data field 2206. For some commands an explicit command size 2208 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments commands are aligned via multiples of a double word. Other command formats can be used.
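
A parsing sketch for this command format, with assumed field sizes: the parser reads the client field to route the command, and the client unit then consults the opcode, optional sub-opcode, and data payload.

    // Command format of FIG. 18A as a data structure plus a routing step.
    #include <cstdint>
    #include <vector>

    struct GfxCommand {
        uint8_t  client;      // 2202: memory, render, 2D, 3D, or media unit
        uint8_t  opcode;      // 2204
        uint8_t  sub_opcode;  // 2205, present for some commands
        uint32_t size;        // 2208, explicit for some commands, else derived from opcode
        std::vector<uint32_t> data;  // 2206
    };

    enum class ClientUnit { Memory = 0, Render, TwoD, ThreeD, Media };

    void route(const GfxCommand& cmd,
               void (*deliver)(ClientUnit, const GfxCommand&)) {
        // Condition further processing on the client field, as described above.
        deliver(static_cast<ClientUnit>(cmd.client), cmd);
    }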

[0507] The flow diagram in FIG. 18B illustrates an exemplary graphics processor command sequence 2210. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands at least partially concurrently.

[0508] In some embodiments, the graphics processor command sequence 2210 may begin with a pipeline flush command 2212 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 2222 and the media pipeline 2224 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked `dirty` can be flushed to memory. In some embodiments, pipeline flush command 2212 can be used for pipeline synchronization or before placing the graphics processor into a low power state.

[0509] In some embodiments, a pipeline select command 2213 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 2213 is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 2212 is required immediately before a pipeline switch via the pipeline select command 2213.

[0510] In some embodiments, a pipeline control command 2214 configures a graphics pipeline for operation and is used to program the 3D pipeline 2222 and the media pipeline 2224. In some embodiments, pipeline control command 2214 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 2214 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.

[0511] In some embodiments, return buffer state commands 2216 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state commands 2216 select the size and number of return buffers to use for a set of pipeline operations.

[0512] The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 2220, the command sequence is tailored to the 3D pipeline 2222 beginning with the 3D pipeline state 2230 or the media pipeline 2224 beginning at the media pipeline state 2240.

[0513] The commands to configure the 3D pipeline state 2230 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 2230 commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used.

[0514] In some embodiments, 3D primitive 2232 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 2232 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 2232 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, 3D primitive 2232 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 2222 dispatches shader execution threads to graphics processor execution units.

[0515] In some embodiments, 3D pipeline 2222 is triggered via an execute 2234 command or event. In some embodiments, a register write triggers command execution. In some embodiments execution is triggered via a `go` or `kick` command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.
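
For illustration, the 3D path of the command sequence can be pictured as the order in which commands are enqueued: flush, pipeline select, pipeline control, return buffer state, 3D state, one or more 3D primitives, then execute. The token values below are placeholders, not actual command encodings.

    // Emit the 3D path of the command sequence of FIG. 18B with placeholder tokens.
    #include <cstdint>
    #include <vector>

    enum : uint32_t {
        PIPELINE_FLUSH = 1, PIPELINE_SELECT_3D, PIPELINE_CONTROL,
        RETURN_BUFFER_STATE, STATE_3D, PRIMITIVE_3D, EXECUTE
    };

    std::vector<uint32_t> build_3d_sequence(uint32_t primitive_count) {
        std::vector<uint32_t> seq = {
            PIPELINE_FLUSH,        // 2212: drain pending work, invalidate read caches
            PIPELINE_SELECT_3D,    // 2213: switch to the 3D pipeline
            PIPELINE_CONTROL,      // 2214: program pipeline state, clear caches
            RETURN_BUFFER_STATE,   // 2216: size and number of return buffers
            STATE_3D               // 2230: vertex buffer, depth buffer, and other state
        };
        for (uint32_t i = 0; i < primitive_count; ++i)
            seq.push_back(PRIMITIVE_3D);  // 2232: submit primitives for vertex fetch
        seq.push_back(EXECUTE);           // 2234: kick geometry processing
        return seq;
    }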

[0516] In some embodiments, the graphics processor command sequence 2210 follows the media pipeline 2224 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 2224 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.

[0517] In some embodiments, media pipeline 2224 is configured in a similar manner as the 3D pipeline 2222. A set of commands to configure the media pipeline state 2240 are dispatched or placed into a command queue before the media object commands 2242. In some embodiments, commands for the media pipeline state 2240 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, commands for the media pipeline state 2240 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.

[0518] In some embodiments, media object commands 2242 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 2242. Once the pipeline state is configured and media object commands 2242 are queued, the media pipeline 2224 is triggered via an execute command 2244 or an equivalent execute event (e.g., register write). Output from media pipeline 2224 may then be post processed by operations provided by the 3D pipeline 2222 or the media pipeline 2224. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations.
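
The command sequence pattern described above (pipeline state commands, then primitive or media object commands, then an execute trigger) can be summarized with a short sketch. The following is a minimal, hypothetical illustration in C++ of how a host-side driver might serialize such a sequence into a batch buffer before submission; the opcode names, structure layout, and field widths are invented for this sketch and do not reflect any particular command format.

    #include <cstdint>
    #include <vector>

    // Hypothetical command opcodes; real command encodings are implementation specific.
    enum class Opcode : uint32_t {
        PipelineState,    // e.g., depth buffer or vertex element state
        PrimitiveSubmit,  // 3D primitive to be processed by the 3D pipeline
        MediaObject,      // pointer to a media object for the media pipeline
        Execute           // "go"/"kick" trigger that flushes the sequence
    };

    struct Command {
        Opcode   op;
        uint32_t payload;  // a state value or a handle to buffered data
    };

    // Builds a batch in the order described above: state, then objects, then execute.
    class BatchBuffer {
    public:
        void SetPipelineState(uint32_t stateWord) { cmds_.push_back({Opcode::PipelineState, stateWord}); }
        void SubmitPrimitive(uint32_t vertexDataHandle) { cmds_.push_back({Opcode::PrimitiveSubmit, vertexDataHandle}); }
        void SubmitMediaObject(uint32_t mediaBufferHandle) { cmds_.push_back({Opcode::MediaObject, mediaBufferHandle}); }
        void Execute() { cmds_.push_back({Opcode::Execute, 0}); }
        const std::vector<Command>& Commands() const { return cmds_; }
    private:
        std::vector<Command> cmds_;
    };

    int main() {
        BatchBuffer batch;
        batch.SetPipelineState(0x01);  // configure state before submitting work
        batch.SubmitPrimitive(42);     // handle to vertex data in a return buffer
        batch.Execute();               // trigger geometry processing and rasterization
        return static_cast<int>(batch.Commands().size());
    }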

[0519] Graphics Software Architecture

[0520] FIG. 19 illustrates an exemplary graphics software architecture for a data processing system 2300 according to some embodiments. In some embodiments, software architecture includes a 3D graphics application 2310, an operating system 2320, and at least one processor 2330. In some embodiments, processor 2330 includes a graphics processor 2332 and one or more general-purpose processor core(s) 2334. The graphics application 2310 and operating system 2320 each execute in the system memory 2350 of the data processing system.

[0521] In some embodiments, 3D graphics application 2310 contains one or more shader programs including shader instructions 2312. The shader language instructions may be in a high-level shader language, such as the High-Level Shader Language (HLSL) of Direct3D, the OpenGL Shader Language (GLSL), and so forth. The application also includes executable instructions 2314 in a machine language suitable for execution by the general-purpose processor core 2334. The application also includes graphics objects 2316 defined by vertex data.

[0522] In some embodiments, operating system 2320 is a Microsoft.RTM. Windows.RTM. operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system 2320 can support a graphics API 2322 such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 2320 uses a front-end shader compiler 2324 to compile any shader instructions 2312 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 2310. In some embodiments, the shader instructions 2312 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.

[0523] In some embodiments, user mode graphics driver 2326 contains a back-end shader compiler 2327 to convert the shader instructions 2312 into a hardware specific representation. When the OpenGL API is in use, shader instructions 2312 in the GLSL high-level language are passed to a user mode graphics driver 2326 for compilation. In some embodiments, user mode graphics driver 2326 uses operating system kernel mode functions 2328 to communicate with a kernel mode graphics driver 2329. In some embodiments, kernel mode graphics driver 2329 communicates with graphics processor 2332 to dispatch commands and instructions.
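
The two-stage compilation path described above (a front-end compiler lowering high-level shader source to an intermediate form, followed by a back-end compiler in the user mode graphics driver lowering that form to a hardware-specific representation) can be sketched as follows. The function names and data formats below are hypothetical placeholders, not actual driver or API entry points.

    #include <cstdint>
    #include <string>
    #include <vector>

    // Hypothetical front end: high-level shader source -> intermediate representation.
    std::vector<uint32_t> FrontEndCompile(const std::string& shaderSource) {
        // A real compiler parses and lowers the source; one token per character
        // stands in for an IR word stream here.
        std::vector<uint32_t> ir;
        for (char c : shaderSource) ir.push_back(static_cast<uint32_t>(c));
        return ir;
    }

    // Hypothetical back end in the user mode driver: IR -> hardware-specific code.
    std::vector<uint8_t> BackEndCompile(const std::vector<uint32_t>& ir) {
        std::vector<uint8_t> isa;
        for (uint32_t word : ir) isa.push_back(static_cast<uint8_t>(word & 0xFF));
        return isa;
    }

    int main() {
        const std::string source = "float4 main() : SV_Target { return 1; }";
        auto ir  = FrontEndCompile(source);  // just-in-time or ahead-of-time compilation
        auto isa = BackEndCompile(ir);       // handed to the kernel mode driver for dispatch
        return isa.empty() ? 1 : 0;
    }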

[0524] IP Core Implementations

[0525] One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.

[0526] FIG. 20A is a block diagram illustrating an IP core development system 2400 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 2400 may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 2430 can generate a software simulation 2410 of an IP core design in a high-level programming language (e.g., C/C++). The software simulation 2410 can be used to design, test, and verify the behavior of the IP core using a simulation model 2412. The simulation model 2412 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 2415 can then be created or synthesized from the simulation model 2412. The RTL design 2415 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 2415, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.

[0527] The RTL design 2415 or equivalent may be further synthesized by the design facility into a hardware model 2420, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a 3rd party fabrication facility 2465 using non-volatile memory 2440 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 2450 or wireless connection 2460. The fabrication facility 2465 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein. Some embodiments may generate an IP core design for aspects of the architecture 1150 (FIG. 7A), method 1190 (FIG. 7B), method 810 (FIG. 7C), and/or the method 840 (FIG. 7D) already discussed. Some embodiments may further relate to performance enhanced computing architecture 3400 (FIG. 6G), entry 3402 (FIG. 6H), and method 3500 (FIG. 6I) already discussed.
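
The flow above begins with a C/C++ software simulation of the IP core before an RTL design is created. As a minimal, hypothetical illustration, the fragment below models a trivial clocked accumulator block, the kind of behavioral model a simulation model 2412 might exercise to verify functionality before synthesis; the module and its check are invented for this sketch.

    #include <cstdint>
    #include <iostream>

    // Minimal behavioral model of a hypothetical IP block: an accumulator with a
    // single architectural register that is updated on each simulated clock cycle.
    class AccumulatorModel {
    public:
        void Clock(uint32_t input) { reg_ += input; }  // one register transfer per cycle
        uint32_t Read() const { return reg_; }
    private:
        uint32_t reg_ = 0;
    };

    int main() {
        AccumulatorModel dut;  // "device under test" in the functional simulation
        for (uint32_t cycle = 1; cycle <= 4; ++cycle) dut.Clock(cycle);
        std::cout << "accumulated value: " << dut.Read() << "\n";  // expected: 10
        return dut.Read() == 10 ? 0 : 1;  // simple functional check before RTL work
    }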

[0528] FIG. 20B illustrates a cross-section side view of an integrated circuit package assembly 2470, according to some embodiments described herein. The integrated circuit package assembly 2470 may implement aspects of the architecture 1150 (FIG. 7A), method 1190 (FIG. 7B), method 810 (FIG. 7C), and/or the method 840 (FIG. 7D) already discussed, and may further include a CCE (FIG. 7A). The integrated circuit package assembly 2470 illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly 2470 includes multiple units of hardware logic 2472, 2474 connected to a substrate 2480. The logic 2472, 2474 may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic 2472, 2474 can be implemented within a semiconductor die and coupled with the substrate 2480 via an interconnect structure 2473. The interconnect structure 2473 may be configured to route electrical signals between the logic 2472, 2474 and the substrate 2480, and can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 2473 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic 2472, 2474. In some embodiments, the substrate 2480 is an epoxy-based laminate substrate. The substrate 2480 may include other suitable types of substrates in other embodiments. The package assembly 2470 can be connected to other electrical devices via a package interconnect 2483. The package interconnect 2483 may be coupled to a surface of the substrate 2480 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.

[0529] In some embodiments, the units of logic 2472, 2474 are electrically coupled with a bridge 2482 that is configured to route electrical signals between the logic 2472, 2474. The bridge 2482 may be a dense interconnect structure that provides a route for electrical signals. The bridge 2482 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic 2472, 2474.

[0530] Although two units of logic 2472, 2474 and a bridge 2482 are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge 2482 may be excluded when the logic is included on a single die. Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations.

[0531] FIG. 20C illustrates a package assembly 2490 that includes multiple units of hardware logic chiplets connected to a substrate 2480 (e.g., base die). A graphics processing unit, parallel processor, and/or compute accelerator as described herein can be composed from diverse silicon chiplets that are separately manufactured. In this context, a chiplet is an at least partially packaged integrated circuit that includes distinct units of logic that can be assembled with other chiplets into a larger package. A diverse set of chiplets with different IP core logic can be assembled into a single device. Additionally, the chiplets can be integrated into a base die or base chiplet using active interposer technology. The concepts described herein enable the interconnection and communication between the different forms of IP within the GPU. IP cores can be manufactured using different process technologies and composed during manufacturing, which avoids the complexity of converging multiple IPs, especially on a large SoC with several flavors of IPs, to the same manufacturing process. Enabling the use of multiple process technologies improves the time to market and provides a cost-effective way to create multiple product SKUs. Additionally, the disaggregated IPs are more amenable to being power gated independently; components that are not in use on a given workload can be powered off, reducing overall power consumption.

[0532] The hardware logic chiplets can include special purpose hardware logic chiplets 2472, logic or I/O chiplets 2474, and/or memory chiplets 2475. The hardware logic chiplets 2472 and logic or I/O chiplets 2474 may be implemented at least partly in configurable logic or fixed-functionality logic hardware and can include one or more portions of any of the processor core(s), graphics processor(s), parallel processors, or other accelerator devices described herein. The memory chiplets 2475 can be DRAM (e.g., GDDR, HBM) memory or cache (SRAM) memory.

[0533] Each chiplet can be fabricated as a separate semiconductor die and coupled with the substrate 2480 via an interconnect structure 2473. The interconnect structure 2473 may be configured to route electrical signals between the various chiplets and logic within the substrate 2480. The interconnect structure 2473 can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 2473 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic, I/O and memory chiplets.

[0534] In some embodiments, the substrate 2480 is an epoxy-based laminate substrate. The substrate 2480 may include other suitable types of substrates in other embodiments. The package assembly 2490 can be connected to other electrical devices via a package interconnect 2483. The package interconnect 2483 may be coupled to a surface of the substrate 2480 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.

[0535] In some embodiments, a logic or I/O chiplet 2474 and a memory chiplet 2475 can be electrically coupled via a bridge 2487 that is configured to route electrical signals between the logic or I/O chiplet 2474 and a memory chiplet 2475. The bridge 2487 may be a dense interconnect structure that provides a route for electrical signals. The bridge 2487 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic or I/O chiplet 2474 and a memory chiplet 2475. The bridge 2487 may also be referred to as a silicon bridge or an interconnect bridge. For example, the bridge 2487, in some embodiments, is an Embedded Multi-die Interconnect Bridge (EMIB). In some embodiments, the bridge 2487 may simply be a direct connection from one chiplet to another chiplet.

[0536] The substrate 2480 can include hardware components for I/O 2491, cache memory 2492, and other hardware logic 2493. A fabric 2485 can be embedded in the substrate 2480 to enable communication between the various logic chiplets and the logic 2491, 2493 within the substrate 2480. In one embodiment, the I/O 2491, fabric 2485, cache, bridge, and other hardware logic 2493 can be integrated into a base die that is layered on top of the substrate 2480.

[0537] In various embodiments, a package assembly 2490 can include a fewer or greater number of components and chiplets that are interconnected by a fabric 2485 or one or more bridges 2487. The chiplets within the package assembly 2490 may be arranged in a 3D or 2.5D arrangement. In general, bridge structures 2487 may be used to facilitate a point to point interconnect between, for example, logic or I/O chiplets and memory chiplets. The fabric 2485 can be used to interconnect the various logic and/or I/O chiplets (e.g., chiplets 2472, 2474, 2491, 2493) with other logic and/or I/O chiplets. In one embodiment, the cache memory 2492 within the substrate can act as a global cache for the package assembly 2490, part of a distributed global cache, or as a dedicated cache for the fabric 2485.

[0538] FIG. 20D illustrates a package assembly 2494 including interchangeable chiplets 2495, according to an embodiment. The interchangeable chiplets 2495 can be assembled into standardized slots on one or more base chiplets 2496, 2498. The base chiplets 2496, 2498 can be coupled via a bridge interconnect 2497, which can be similar to the other bridge interconnects described herein and may be, for example, an EMIB. Memory chiplets can also be connected to logic or I/O chiplets via a bridge interconnect. I/O and logic chiplets can communicate via an interconnect fabric. The base chiplets can each support one or more slots in a standardized format for one of logic or I/O or memory/cache.

[0539] In one embodiment, SRAM and power delivery circuits can be fabricated into one or more of the base chiplets 2496, 2498, which can be fabricated using a different process technology relative to the interchangeable chiplets 2495 that are stacked on top of the base chiplets. For example, the base chiplets 2496, 2498 can be fabricated using a larger process technology, while the interchangeable chiplets can be manufactured using a smaller process technology. One or more of the interchangeable chiplets 2495 may be memory (e.g., DRAM) chiplets. Different memory densities can be selected for the package assembly 2494 based on the power and/or performance targeted for the product that uses the package assembly 2494. Additionally, logic chiplets with a different number or type of functional units can be selected at the time of assembly based on the power and/or performance targeted for the product. Additionally, chiplets containing IP logic cores of differing types can be inserted into the interchangeable chiplet slots, enabling hybrid processor designs that can mix and match different technology IP blocks.

[0540] Exemplary System on a Chip Integrated Circuit

[0541] FIGS. 21-22B illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

[0542] FIG. 21 is a block diagram illustrating an exemplary system on a chip integrated circuit 1200 that may be fabricated using one or more IP cores, according to an embodiment. Exemplary integrated circuit 1200 includes one or more application processor(s) 1205 (e.g., CPUs), at least one graphics processor 1210, and may additionally include an image processor 1215 and/or a video processor 1220, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit 1200 includes peripheral or bus logic including a USB controller 1225, UART controller 1230, an SPI/SDIO controller 1235, and an I2S/I2C controller 1240. Additionally, the integrated circuit can include a display device 1245 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1250 and a mobile industry processor interface (MIPI) display interface 1255. Storage may be provided by a flash memory subsystem 1260 including flash memory and a flash memory controller. Memory interface may be provided via a memory controller 1265 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 1270.

[0544] FIGS. 22A-22B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. FIG. 22A illustrates an exemplary graphics processor 2610 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. FIG. 22B illustrates an additional exemplary graphics processor 2640 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor 2610 of FIG. 22A is an example of a low power graphics processor core. Graphics processor 2640 of FIG. 22B is an example of a higher performance graphics processor core. Each of the graphics processors 2610, 2640 can be variants of the graphics processor 2510 of FIG. 21.

[0545] As shown in FIG. 22A, graphics processor 2610 includes a vertex processor 2605 and one or more fragment processor(s) 2615A-2615N (e.g., 2615A, 2615B, 2615C, 2615D, through 2615N-1, and 2615N). Graphics processor 2610 can execute different shader programs via separate logic, such that the vertex processor 2605 is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s) 2615A-2615N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 2605 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s) 2615A-2615N use the primitive and vertex data generated by the vertex processor 2605 to produce a framebuffer that is displayed on a display device. In one embodiment, the fragment processor(s) 2615A-2615N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct3D API. In some embodiments, the GPU 2610 may operate similarly to the GPU 1152 (FIG. 7A).

[0546] Graphics processor 2610 additionally includes one or more memory management units (MMUs) 2620A-2620B, cache(s) 2625A-2625B, and circuit interconnect(s) 2630A-2630B. The one or more MMU(s) 2620A-2620B provide for virtual to physical address mapping for the graphics processor 2610, including for the vertex processor 2605 and/or fragment processor(s) 2615A-2615N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s) 2625A-2625B. In one embodiment the one or more MMU(s) 2620A-2620B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 2505, image processor 2515, and/or video processor 2520 of FIG. 21, such that each processor 2505-2520 can participate in a shared or unified virtual memory system. The one or more circuit interconnect(s) 2630A-2630B enable graphics processor 2610 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments.
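
The virtual-to-physical mapping provided by the MMU(s) 2620A-2620B can be illustrated with a simplified sketch. The single-level table and 4 KiB page size below are hypothetical simplifications (a real MMU uses multi-level page tables and TLBs); the sketch only shows how a virtual address splits into a page number and an offset during translation.

    #include <cstdint>
    #include <optional>
    #include <unordered_map>

    constexpr uint64_t kPageShift  = 12;                      // hypothetical 4 KiB pages
    constexpr uint64_t kOffsetMask = (1ull << kPageShift) - 1;

    // Single-level page table: virtual page number -> physical page number.
    class SimpleMmu {
    public:
        void Map(uint64_t vpn, uint64_t ppn) { table_[vpn] = ppn; }

        std::optional<uint64_t> Translate(uint64_t va) const {
            auto it = table_.find(va >> kPageShift);
            if (it == table_.end()) return std::nullopt;      // fault: no mapping present
            return (it->second << kPageShift) | (va & kOffsetMask);
        }
    private:
        std::unordered_map<uint64_t, uint64_t> table_;
    };

    int main() {
        SimpleMmu mmu;
        mmu.Map(0x1, 0x80);                   // map virtual page 1 to physical page 0x80
        auto pa = mmu.Translate(0x1234);      // virtual page 1, offset 0x234
        return (pa && *pa == 0x80234) ? 0 : 1;
    }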

[0547] As shown in FIG. 22B, graphics processor 2640 includes the one or more MMU(s) 2620A-2620B, cache(s) 2625A-2625B, and circuit interconnect(s) 2630A-2630B of the graphics processor 2610 of FIG. 22A. Graphics processor 2640 includes one or more shader core(s) 2655A-2655N (e.g., 2655A, 2655B, 2655C, 2655D, 2655E, 2655F, through 2655N-1, and 2655N), which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, graphics processor 2640 includes an inter-core task manager 2645, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 2655A-2655N, and a tiling unit 2658 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches.
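
Tile-based rendering, as accelerated by the tiling unit 2658, subdivides rendering work for a scene in image space and processes it one tile at a time. The sketch below bins hypothetical axis-aligned primitive bounds into fixed-size tiles; the 32-pixel tile size and the data layout are chosen only for illustration.

    #include <cstddef>
    #include <vector>

    constexpr int kTileSize = 32;  // hypothetical tile dimension in pixels

    struct Bounds { int x0, y0, x1, y1; };  // axis-aligned bound of a primitive

    // Bin primitives into tiles covering a width x height render target; each tile
    // receives the indices of the primitives whose bounds overlap it.
    std::vector<std::vector<std::size_t>> BinPrimitives(
            const std::vector<Bounds>& prims, int width, int height) {
        const int tilesX = (width  + kTileSize - 1) / kTileSize;
        const int tilesY = (height + kTileSize - 1) / kTileSize;
        std::vector<std::vector<std::size_t>> bins(tilesX * tilesY);
        for (std::size_t i = 0; i < prims.size(); ++i) {
            const Bounds& b = prims[i];
            for (int ty = b.y0 / kTileSize; ty <= (b.y1 - 1) / kTileSize; ++ty)
                for (int tx = b.x0 / kTileSize; tx <= (b.x1 - 1) / kTileSize; ++tx)
                    bins[ty * tilesX + tx].push_back(i);
        }
        return bins;
    }

    int main() {
        std::vector<Bounds> prims = {{0, 0, 40, 40}, {100, 100, 120, 120}};
        auto bins = BinPrimitives(prims, 128, 128);  // 4 x 4 grid of tiles
        return bins[0].size() == 1 ? 0 : 1;          // tile (0, 0) overlaps only prim 0
    }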

ADDITIONAL NOTES AND EXAMPLES

[0548] Example A1 includes a computing system comprising a graphics processor including a plurality of cores including lanes and encryption engines, wherein each of the lanes is to be associated with a different encryption engine of the encryption engines, a memory including a set of instructions, which when executed by the graphics processor, cause the computing system to process thread data with the lanes, and encrypt, with the encryption engines, the lanes according to a plurality of different encryption keys.

[0549] Example A2 includes the computing system of Example A1, wherein the instructions, when executed, cause the computing system to identify that a first thread is to be associated with a first context, identify a first key associated with the first context, and encrypt, with a first encryption engine of the encryption engines, first data associated with the first thread based on the first key.

[0550] Example A3 includes the computing system of Example A2, wherein the instructions, when executed, cause the computing system to identify that a second thread is to be associated with a second context, identify a second key associated with the second context, and encrypt, with a second encryption engine, second data associated with the second thread based on the second key concurrently with the encryption of the first data.

[0551] Example A4 includes the computing system of Example A1, wherein the instructions, when executed, cause the computing system to verify credentials of a thread, and assign a key to the thread based on the credentials.

[0552] Example A5 includes the computing system of Example A1, wherein the instructions, when executed, cause the computing system to concatenate data associated with a same encryption key, wherein the data is to originate from a plurality of the lanes.

[0553] Example A6 includes the computing system of any one of Examples A1 to A5, wherein the graphics processor is to be a single instruction, multiple data architecture.
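
Examples A1 to A6 above describe processing thread data on SIMD lanes and encrypting each lane with an encryption engine whose key is selected by the thread's context. The sketch below, with entirely hypothetical names and an XOR placeholder standing in for a real encryption engine, illustrates only the per-context key selection; it is not a secure construction.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <map>

    constexpr std::size_t kLanes = 4;  // hypothetical SIMD width

    struct Thread {
        uint32_t contextId;                  // selects the key used for this thread
        std::array<uint32_t, kLanes> data;   // one element per lane
    };

    // Placeholder for an encryption engine attached to a lane; a real engine
    // would implement an actual cipher rather than an XOR.
    uint32_t EncryptLane(uint32_t value, uint32_t key) { return value ^ key; }

    int main() {
        // Per-context keys, e.g. assigned after verifying the thread's credentials.
        std::map<uint32_t, uint32_t> contextKeys = {{1, 0xA5A5A5A5u}, {2, 0x3C3C3C3Cu}};

        Thread first{1, {10, 20, 30, 40}};
        Thread second{2, {50, 60, 70, 80}};

        std::array<uint32_t, kLanes> out1{}, out2{};
        for (std::size_t lane = 0; lane < kLanes; ++lane) {
            // Each lane has its own engine; the two threads use different keys and
            // could be encrypted concurrently on different engines.
            out1[lane] = EncryptLane(first.data[lane], contextKeys[first.contextId]);
            out2[lane] = EncryptLane(second.data[lane], contextKeys[second.contextId]);
        }
        return (out1[0] != out2[0]) ? 0 : 1;  // same position, different keys, different output
    }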

[0554] Example A7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to process thread data with lanes of a plurality of cores of a graphics processor, wherein the plurality of cores include encryption engines, wherein each of the lanes is to be associated with a different encryption engine of the encryption engines, and encrypt, with the encryption engines, the lanes according to a plurality of different encryption keys.

[0555] Example A8 includes the apparatus of Example A7, wherein the logic coupled to the one or more substrates is to identify that a first thread is to be associated with a first context, identify a first key associated with the first context, and encrypt, with a first encryption engine of the encryption engines, first data associated with the first thread based on the first key.

[0556] Example A9 includes the apparatus of Example A8, wherein the logic coupled to the one or more substrates is to identify that a second thread is to be associated with a second context, identify a second key associated with the second context, and encrypt, with a second encryption engine, second data associated with the second thread based on the second key concurrently with the encryption of the first data.

[0557] Example A10 includes the apparatus of Example A7, wherein the logic coupled to the one or more substrates is to verify credentials of a thread, and assign a key to the thread based on the credentials.

[0558] Example A11 includes the apparatus of Example A7, wherein the logic coupled to the one or more substrates is to concatenate data associated with a same encryption key, wherein the data is to originate from a plurality of the lanes.

[0559] Example A12 includes the apparatus of any one of Examples A7 to A11, wherein the graphics processor is to be a single instruction, multiple data architecture.

[0560] Example A13 includes the apparatus of any one of Examples A7 to A11, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.

[0561] Example A14 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to process thread data with lanes of a plurality of cores of a graphics processor, wherein the plurality of cores include encryption engines, wherein each of the lanes is to be associated with a different encryption engine of the encryption engines, and encrypt, with the encryption engines, the lanes according to a plurality of different encryption keys.

[0562] Example A15 includes the at least one computer readable storage medium of Example A14, wherein the instructions, when executed, cause the computing device to identify that a first thread is to be associated with a first context, identify a first key associated with the first context, and encrypt, with a first encryption engine of the encryption engines, first data associated with the first thread based on the first key.

[0563] Example A16 includes the at least one computer readable storage medium of Example A15, wherein the instructions, when executed, cause the computing device to identify that a second thread is to be associated with a second context, identify a second key associated with the second context, and encrypt, with a second encryption engine, second data associated with the second thread based on the second key concurrently with the encryption of the first data.

[0564] Example A17 includes the at least one computer readable storage medium of Example A14, wherein the instructions, when executed, cause the computing device to verify credentials of a thread, and assign a key to the thread based on the credentials.

[0565] Example A18 includes the at least one computer readable storage medium of Example A14, wherein the instructions, when executed, cause the computing device to concatenate data associated with a same encryption key, wherein the data is to originate from a plurality of the lanes.

[0566] Example A19 includes the at least one computer readable storage medium of any one of Examples A14 to A18, wherein the graphics processor is to be a single instruction, multiple data architecture.

[0567] Example A20 includes a method comprising processing thread data with lanes of a plurality of cores of a graphics processor, wherein the plurality of cores include encryption engines, wherein each of the lanes is associated with a different encryption engine of the encryption engines, and encrypting, with the encryption engines, the lanes according to a plurality of different encryption keys.

[0568] Example A21 includes the method of Example A20, further comprising identifying that a first thread is associated with a first context, identifying a first key associated with the first context, and encrypting, with a first encryption engine of the encryption engines, first data associated with the first thread based on the first key.

[0569] Example A22 includes the method of Example A21, further comprising identifying that a second thread is associated with a second context, identifying a second key associated with the second context, and encrypting, with a second encryption engine, second data associated with the second thread based on the second key concurrently with the encryption of the first data.

[0570] Example A23 includes the method of Example A20, further comprising verifying credentials of a thread, and assigning a key to the thread based on the credentials.

[0571] Example A24 includes the method of Example A20, further comprising concatenating data associated with a same encryption key, wherein the data originates from a plurality of the lanes.

[0572] Example A25 includes the method of any one of Examples A20 to A24, wherein the graphics processor is a single instruction, multiple data architecture.

[0573] Example A26 includes an apparatus comprising means for performing the method of any one of Examples A20 to A25.

[0574] Example B1 includes a computing system comprising a graphics processor, a memory including a set of instructions, which when executed by the graphics processor, cause the computing system to identify a plurality of claims associated with a same content, wherein the plurality of claims are to originate from a plurality of sources, and determine an authenticity score for the content based on the plurality of claims.

[0575] Example B2 includes the computing system of Example B1, wherein each respective claim of the plurality of claims is to include an indication of whether the same content is authentic or fake.

[0576] Example B3 includes the computing system of Example B2, wherein one or more of the claims is to include an identification of a machine learning model that generated the indication.

[0577] Example B4 includes the computing system of Example B1, wherein one or more of the claims is to include a non-machine learning reproductive algorithm that is to reproduce the same content.

[0578] Example B5 includes the computing system of Example B1, wherein one or more of the claims is to include a machine learning reproductive algorithm that is to reproduce the same content.

[0579] Example B6 includes the computing system of any one of Examples B1 to B5, wherein the instructions, when executed, cause the computing system to execute a machine learning algorithm to determine the authenticity score.
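
Examples B1 to B6 above describe collecting multiple claims about the same content, each indicating whether the content is authentic or fake, and combining them into an authenticity score. The sketch below uses a simple weighted vote with invented source names and weights; as noted in Example B6, a machine learning algorithm could replace this averaging step.

    #include <string>
    #include <vector>

    // One claim about a piece of content: an authentic/fake indication plus an
    // identifier of the source (e.g., a device or the model that produced it).
    struct Claim {
        std::string source;
        bool authentic;   // true if the source claims the content is authentic
        double weight;    // hypothetical trust weight assigned to the source
    };

    // Weighted vote over the claims, normalized to [0, 1]; 1.0 means every
    // weighted claim asserts that the content is authentic.
    double AuthenticityScore(const std::vector<Claim>& claims) {
        double total = 0.0, positive = 0.0;
        for (const Claim& c : claims) {
            total += c.weight;
            if (c.authentic) positive += c.weight;
        }
        return total > 0.0 ? positive / total : 0.0;
    }

    int main() {
        std::vector<Claim> claims = {
            {"capture-pipeline", true, 0.5},
            {"detector-model-a", true, 0.3},
            {"detector-model-b", false, 0.2}};
        double score = AuthenticityScore(claims);  // 0.8 with these illustrative weights
        return score > 0.5 ? 0 : 1;                // the threshold is purely illustrative
    }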

[0580] Example B7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to identify a plurality of claims associated with a same content, wherein the plurality of claims are to originate from a plurality of sources, and determine an authenticity score for the content based on the plurality of claims.

[0581] Example B8 includes the apparatus of Example B7, wherein each respective claim of the plurality of claims is to include an indication of whether the same content is authentic or fake.

[0582] Example B9 includes the apparatus of Example B8, wherein one or more of the claims is to include an identification of a machine learning model that generated the indication.

[0583] Example B10 includes the apparatus of Example B7, wherein one or more of the claims is to include a non-machine learning reproductive algorithm that is to reproduce the same content.

[0584] Example B11 includes the apparatus of Example B7, wherein one or more of the claims is to include a machine learning reproductive algorithm that is to reproduce the same content.

[0585] Example B12 includes the apparatus of any one of Examples B7 to B11, wherein the logic coupled to the one or more substrates is to execute a machine learning algorithm to determine the authenticity score.

[0586] Example B13 includes the apparatus of any one of Examples B7 to B11, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.

[0587] Example B14 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to identify a plurality of claims associated with a same content, wherein the plurality of claims is to originate from a plurality of sources, and determine an authenticity score for the content based on the plurality of claims.

[0588] Example B15 includes the at least one computer readable storage medium of Example B14, wherein each respective claim of the plurality of claims is to include an indication of whether the same content is authentic or fake.

[0589] Example B16 includes the at least one computer readable storage medium of Example B15, wherein one or more of the claims is to include an identification of a machine learning model that generated the indication.

[0590] Example B17 includes the at least one computer readable storage medium of Example B14, wherein one or more of the claims is to include a non-machine learning reproductive algorithm that is to reproduce the same content.

[0591] Example B18 includes the at least one computer readable storage medium of Example B14, wherein one or more of the claims is to include a machine learning reproductive algorithm that is to reproduce the same content.

[0592] Example B19 includes the at least one computer readable storage medium of any one of Examples B14 to B18, wherein the instructions, when executed, cause the computing device to execute a machine learning algorithm to determine the authenticity score.

[0593] Example B20 includes a method comprising identifying a plurality of claims associated with a same content, wherein the plurality of claims originate from a plurality of sources, and determining an authenticity score for the content based on the plurality of claims.

[0594] Example B21 includes the method of Example B20, wherein each respective claim of the plurality of claims includes an indication of whether the same content is authentic or fake.

[0595] Example B22 includes the method of Example B21, wherein one or more of the claims includes an identification of a machine learning model that generated the indication.

[0596] Example B23 includes the method of Example B20, wherein one or more of the claims include a non-machine learning reproductive algorithm that reproduced the same content.

[0597] Example B24 includes the method of Example B20, wherein one or more of the claims includes a machine learning reproductive algorithm that reproduced the same content.

[0598] Example B25 includes the method of any one of Examples B20 to B24, further comprising executing a machine learning algorithm to determine the authenticity score.

[0599] Example B26 includes an apparatus comprising means for performing the method of any one of Examples B20 to B25.

[0600] Example C1 includes a computing system comprising a data storage, a host processor, a plurality of accelerators that are to be divided into a first trust domain and a second trust domain, wherein the plurality of accelerators are to include a graphics processor, and a converged cryptographic engine (CCE) implemented at least partly in one or more of configurable logic or fixed-functionality logic hardware, and a memory including a set of instructions, which when executed by one or more of the graphics processor or the host processor, cause the computing system to partition a plurality of encryption keys between the first trust domain and the second trust domain so that first encryption keys of the plurality of encryption keys are assigned to the first trust domain, and second encryption keys of the plurality of encryption keys are assigned to the second trust domain, and encrypt, with the CCE, data according to the first encryption keys or the second encryption keys based on whether the data is to originate from the first trust domain or the second trust domain.

[0601] Example C2 includes the computing system of example C1, wherein the instructions, when executed, cause the computing system to identify, with the CCE, that a first data write is to originate from the first trust domain, and encrypt, with the CCE, data associated with the first data write with a key of the first encryption keys.

[0602] Example C3 includes the computing system of example C2, wherein the instructions, when executed, cause the computing system to identify, with the CCE, that a second data write is to originate from the second trust domain, and encrypt, with the CCE, data associated with the second data write with a key of the second encryption keys.

[0603] Example C4 includes the computing system of example C1, wherein the instructions, when executed, cause the computing system to block the host processor from accessing the first encryption keys and the second encryption keys.

[0604] Example C5 includes the computing system of example C1, wherein the instructions, when executed, cause the computing system to store the encrypted data in the data storage.

[0605] Example C6 includes the computing system of any one of examples C1 to C5, wherein the CCE is to be in a memory path between the first and second trust domains and the data storage.
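
Examples C1 to C6 above describe a converged cryptographic engine (CCE) that partitions encryption keys between trust domains and encrypts each data write with a key of the domain from which the write originates. The sketch below illustrates only that key-selection structure; the XOR operation is a placeholder for a real cipher, and the domain identifiers and key values are invented for the example.

    #include <cstdint>
    #include <map>
    #include <vector>

    enum class TrustDomain : uint32_t { First = 1, Second = 2 };

    // Hypothetical CCE model: each trust domain is assigned its own keys, and the
    // key table is held inside the engine rather than exposed to the host processor.
    class CceModel {
    public:
        void AssignKey(TrustDomain domain, uint64_t key) { keys_[domain] = key; }

        // Encrypt a data write according to the key of its originating domain.
        std::vector<uint64_t> EncryptWrite(TrustDomain origin,
                                           const std::vector<uint64_t>& data) const {
            const uint64_t key = keys_.at(origin);
            std::vector<uint64_t> out;
            out.reserve(data.size());
            for (uint64_t word : data) out.push_back(word ^ key);  // placeholder cipher
            return out;
        }
    private:
        std::map<TrustDomain, uint64_t> keys_;
    };

    int main() {
        CceModel cce;
        cce.AssignKey(TrustDomain::First,  0x1111222233334444ull);
        cce.AssignKey(TrustDomain::Second, 0x5555666677778888ull);

        auto c1 = cce.EncryptWrite(TrustDomain::First,  {0xDEADBEEFull});
        auto c2 = cce.EncryptWrite(TrustDomain::Second, {0xDEADBEEFull});
        return (c1[0] != c2[0]) ? 0 : 1;  // same plaintext, different domain keys
    }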

[0606] Example C7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to partition a plurality of encryption keys between a first trust domain and a second trust domain so that first encryption keys of the plurality of encryption keys are assigned to the first trust domain, and second encryption keys of the plurality of encryption keys are assigned to the second trust domain, wherein the first and second trust domains are to be associated with a plurality of accelerators, and encrypt, with a converged cryptographic engine (CCE), data according to the first encryption keys or the second encryption keys based on whether the data is to originate from the first trust domain or the second trust domain.

[0607] Example C8 includes the apparatus of example C7, wherein the logic coupled to the one or more substrates is to identify, with the CCE, that a first data write is to originate from the first trust domain, and encrypt, with the CCE, data associated with the first data write with a key of the first encryption keys.

[0608] Example C9 includes the apparatus of example C8, wherein the logic coupled to the one or more substrates is to identify, with the CCE, that a second data write is to originate from the second trust domain, and encrypt, with the CCE, data associated with the second data write with a key of the second encryption keys.

[0609] Example C10 includes the apparatus of example C7, wherein the logic coupled to the one or more substrates is to block a host processor from accessing the first encryption keys and the second encryption keys.

[0610] Example C11 includes the apparatus of example C9, wherein the logic coupled to the one or more substrates is to store the encrypted data in a data storage.

[0611] Example C12 includes the apparatus of any one of examples C7 to C11, wherein the CCE is to be in a memory path between the first and second trust domains and a data storage.

[0612] Example C13 includes the apparatus of any one of examples C7 to C11, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.

[0613] Example C14 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to partition a plurality of encryption keys between a first trust domain and a second trust domain so that first encryption keys of the plurality of encryption keys are assigned to the first trust domain, and second encryption keys of the plurality of encryption keys are assigned to the second trust domain, wherein the first and second trust domains are to be associated with a plurality of accelerators, and encrypt, with a converged cryptographic engine (CCE), data according to the first encryption keys or the second encryption keys based on whether the data is to originate from the first trust domain or the second trust domain.

[0614] Example C15 includes the at least one computer readable storage medium of example C14, wherein the instructions, when executed, cause the computing device to identify, with the CCE, that a first data write is to originate from the first trust domain, and encrypt, with the CCE, data associated with the first data write with a key of the first encryption keys.

[0615] Example C16 includes the at least one computer readable storage medium of example C15, wherein the instructions, when executed, cause the computing device to identify, with the CCE, that a second data write is to originate from the second trust domain, and encrypt, with the CCE, data associated with the second data write with a key of the second encryption keys.

[0616] Example C17 includes the at least one computer readable storage medium of example C16, wherein the instructions, when executed, cause the computing device to block a host processor from accessing the first encryption keys and the second encryption keys.

[0617] Example C18 includes the at least one computer readable storage medium of example C14, wherein the instructions, when executed, cause the computing device to store the encrypted data in a data storage.

[0618] Example C19 includes the at least one computer readable storage medium of any one of examples C14 to C18, wherein the CCE is to be in a memory path between the first and second trust domains and a data storage.

[0619] Example C20 includes a method comprising partitioning a plurality of encryption keys between a first trust domain and a second trust domain so that first encryption keys of the plurality of encryption keys are assigned to the first trust domain, and second encryption keys of the plurality of encryption keys are assigned to the second trust domain, wherein the first and second trust domains are associated with a plurality of accelerators, and encrypting, with a converged cryptographic engine (CCE), data according to the first encryption keys or the second encryption keys based on whether the data originates from the first trust domain or the second trust domain.

[0620] Example C21 includes the method of example C20, further comprising identifying, with the CCE, that a first data write originates from the first trust domain, and encrypting, with the CCE, data associated with the first data write with a key of the first encryption keys.

[0621] Example C22 includes the method of example C20, further comprising identifying, with the CCE, that a second data write originates from the second trust domain, and encrypting, with the CCE, data associated with the second data write with a key of the second encryption keys.

[0622] Example C23 includes the method of example C22, further including blocking a host processor from accessing the first encryption keys and the second encryption keys.

[0623] Example C24 includes the method of example C20, further including storing the encrypted data in a data storage.

[0624] Example C25 includes the method of any one of examples C20 to C24, wherein the CCE is in a memory path between the first and second trust domains and a data storage.

[0625] Example C26 includes an apparatus comprising means for performing the method of any one of Examples C20 to C25.

[0626] Example D1 includes a computing system comprising a host processor, a graphics processor, a memory including a set of instructions, which when executed by one or more of the host processor or the graphics processor, cause the computing system to encrypt, with the host processor, a virtual address based on a first key and a tweak, wherein the tweak is one or more fields of the virtual address, and share, with the host processor, the first key and the tweak.

[0627] Example D2 includes the computing system of Example D1, wherein the instructions, when executed, cause the computing system to decrypt, with the graphics processor, the encrypted virtual address based on the first key and the tweak.

[0628] Example D3 includes the computing system of Example D2, wherein the instructions, when executed, cause the computing system to identify, with the graphics processor, encrypted data associated with the virtual address.

[0629] Example D4 includes the computing system of Example D3, wherein the instructions, when executed, cause the computing system to decrypt, with the graphics processor, the encrypted data based on the encrypted virtual address.

[0630] Example D5 includes the computing system of Example D4, wherein the instructions, when executed, cause the computing system to decrypt, with the graphics processor, the encrypted data based on a second key.

[0631] Example D6 includes the computing system of any one of Examples D1 to D5, wherein the one or more fields are to include address bits, a size, a type, a location, an ownership, an access control, and permissions.
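
Examples D1 to D6 above describe encrypting a virtual address with a first key and a tweak formed from one or more fields of that address, then sharing the key and tweak so the graphics processor can recover the address. The sketch below shows only the data flow of such a tweakable transform; the XOR construction is not cryptographically meaningful and the field layout is invented for illustration.

    #include <cstdint>

    // Hypothetical tweak fields drawn from the virtual address and its attributes
    // (e.g., size, type, and permissions, as listed in Example D6).
    struct AddressFields {
        uint64_t size;
        uint64_t type;
        uint64_t permissions;
    };

    uint64_t MakeTweak(uint64_t va, const AddressFields& f) {
        // Mix selected address bits with the other fields to form the tweak.
        return (va >> 32) ^ (f.size << 1) ^ (f.type << 9) ^ (f.permissions << 17);
    }

    // Placeholder tweakable transform: XOR with key and tweak. A real design
    // would use a tweakable cipher; only the key/tweak data flow is shown here.
    uint64_t EncryptVa(uint64_t va, uint64_t key, uint64_t tweak)  { return va  ^ key ^ tweak; }
    uint64_t DecryptVa(uint64_t eva, uint64_t key, uint64_t tweak) { return eva ^ key ^ tweak; }

    int main() {
        const uint64_t va  = 0x0000123456789000ull;
        const uint64_t key = 0x0F0F0F0F0F0F0F0Full;       // first key, shared with the GPU
        const AddressFields fields{4096, 2, 0x3};
        const uint64_t tweak = MakeTweak(va, fields);     // tweak, also shared

        const uint64_t encrypted = EncryptVa(va, key, tweak);         // done by the host processor
        const uint64_t recovered = DecryptVa(encrypted, key, tweak);  // done by the graphics processor
        return recovered == va ? 0 : 1;
    }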

[0632] Example D7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to encrypt, with a host processor, a virtual address based on a first key and a tweak, wherein the tweak is one or more fields of the virtual address, and share, with the host processor, the first key and the tweak.

[0633] Example D8 includes the apparatus of Example D7, wherein the logic coupled to the one or more substrates is to decrypt, with a graphics processor, the encrypted virtual address based on the first key and the tweak.

[0634] Example D9 includes the apparatus of Example D8, wherein the logic coupled to the one or more substrates is to identify, with the graphics processor, encrypted data associated with the virtual address.

[0635] Example D10 includes the apparatus of Example D9, wherein the logic coupled to the one or more substrates is to decrypt, with the graphics processor, the encrypted data based on the encrypted virtual address.

[0636] Example D11 includes the apparatus of Example D10, wherein the logic coupled to the one or more substrates is to decrypt, with the graphics processor, the encrypted data based on a second key.

[0637] Example D12 includes the apparatus of any one of Examples D7 to D11, wherein the one or more fields are to include address bits, a size, a type, a location, an ownership, an access control, and permissions.

[0638] Example D13 includes the apparatus of any one of Examples D7 to D11, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.

[0639] Example D14 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to encrypt, with a host processor, a virtual address based on a first key and a tweak, wherein the tweak is one or more fields of the virtual address, and share, with the host processor, the first key and the tweak.

[0640] Example D15 includes the at least one computer readable storage medium of Example D14, wherein the instructions, when executed, cause the computing device to decrypt, with a graphics processor, the encrypted virtual address based on the first key and the tweak.

[0641] Example D16 includes the at least one computer readable storage medium of Example D15, wherein the instructions, when executed, cause the computing device to identify, with the graphics processor, encrypted data associated with the virtual address.

[0642] Example D17 includes the at least one computer readable storage medium of Example D16, wherein the instructions, when executed, cause the computing device to decrypt, with the graphics processor, the encrypted data based on the encrypted virtual address.

[0643] Example D18 includes the at least one computer readable storage medium of Example D17, wherein the instructions, when executed, cause the computing device to decrypt, with the graphics processor, the encrypted data based on a second key.

[0644] Example D19 includes the at least one computer readable storage medium of any one of Examples D14 to D18, wherein the one or more fields are to include address bits, a size, a type, a location, an ownership, an access control, and permissions.

[0645] Example D20 includes a method comprising encrypting, with a host processor, a virtual address based on a first key and a tweak, wherein the tweak is one or more fields of the virtual address, and sharing, with the host processor, the first key and the tweak.

[0646] Example D21 includes the method of Example D20, further comprising decrypting, with a graphics processor, the encrypted virtual address based on the first key and the tweak.

[0647] Example D22 includes the method of Example D21, further comprising identifying, with the graphics processor, encrypted data associated with the virtual address.

[0648] Example D23 includes the method of Example D22, further comprising decrypting, with the graphics processor, the encrypted data based on the encrypted virtual address.

[0649] Example D24 includes the method of Example D23, further comprising decrypting, with the graphics processor, the encrypted data based on a second key.

[0650] Example D25 includes the method of any one of Examples D20 to D24, wherein the one or more fields are to include address bits, a size, a type, a location, an ownership, an access control, and permissions.

[0651] Example D26 includes an apparatus comprising means for performing the method of any one of Examples D20 to D25.

[0652] Example E1 includes a computing system comprising a graphics processor that includes a plurality of compute engines, a plurality of target environments and root-of-trust (RoT) hardware, a memory including a set of instructions, which when executed by the graphics processor, cause the computing system to transmit, with a first target environment of the plurality of target environments, first key seeds to the compute engines, collect claims, with the first target environment, from the compute engines to generate evidence, and generate, with the compute engines, unique identity keys for each of the compute engines based on the first key seeds.

[0653] Example E2 includes the computing system of Example E1, wherein the instructions, when executed, cause the computing system to transmit, with the plurality of target environments, second key seeds to each other.

[0654] Example E3 includes the computing system of Example E2, wherein the instructions, when executed, cause the computing system to generate, with the plurality of target environments, unique identity keys based on the second key seeds.

[0655] Example E4 includes the computing system of Example E3, wherein the instructions, when executed, cause the computing system to collect, with the plurality of target environments, claims of the plurality of target environments, and generate evidence for attestation based on the claims of the plurality of target environments.

[0656] Example E5 includes the computing system of any one of Examples E1 to E4, wherein the instructions, when executed, cause the computing system to generate, with the RoT hardware, a key seed for a second target environment of the plurality of target environments.

[0657] Example E6 includes the computing system of Example E5, wherein the instructions, when executed, cause the computing system to collect claims, with the RoT hardware, from the second target environment, and generate, with the RoT hardware, evidence based on the claims collected from the second target environment.
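
Examples E1 to E6 above describe a target environment transmitting key seeds to compute engines, each engine deriving a unique identity key from its seed, and claims being collected into evidence for attestation. The sketch below uses std::hash and string concatenation as stand-ins for a real key derivation function and evidence format; the engine names, claim contents, and derivation are invented for illustration.

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical compute engine: derives a unique identity key from the seed
    // supplied by the target environment and reports a measurement claim.
    struct ComputeEngine {
        std::string name;
        std::uint64_t identityKey = 0;

        void DeriveIdentityKey(std::uint64_t seed) {
            // Stand-in derivation: hash of (seed, engine name). A real design would
            // use a proper key derivation function rooted in hardware.
            identityKey = std::hash<std::string>{}(name + std::to_string(seed));
        }
        std::string Claim() const {
            return name + ":fw-version=1.2;state=ok";  // hypothetical claim content
        }
    };

    int main() {
        const std::uint64_t keySeed = 0x123456789abcdef0ull;  // from the target environment

        std::vector<ComputeEngine> engines = {{"engine0"}, {"engine1"}, {"engine2"}};
        std::string evidence;
        for (auto& engine : engines) {
            engine.DeriveIdentityKey(keySeed);   // unique per engine, even from one seed
            evidence += engine.Claim() + "\n";   // the target environment collects claims
        }
        // The collected evidence would then be presented to a verifier for attestation.
        return (engines[0].identityKey != engines[1].identityKey && !evidence.empty()) ? 0 : 1;
    }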

[0658] Example E7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to transmit, with a first target environment of a plurality of target environments of a graphics processor, first key seeds to compute engines of the graphics processor, collect claims, with the first target environment, from the compute engines to generate evidence, and generate, with the compute engines, unique identity keys for each of the compute engines based on the first key seeds.

[0659] Example E8 includes the apparatus of Example E7, wherein the logic coupled to the one or more substrates is to transmit, with the plurality of target environments, second key seeds to each other.

[0660] Example E9 includes the apparatus of Example E8, wherein the logic coupled to the one or more substrates is to generate, with the plurality of target environments, unique identity keys based on the second key seeds.

[0661] Example E10 includes the apparatus of Example E9, wherein the logic coupled to the one or more substrates is to collect, with the plurality of target environments, claims of the plurality of target environments, and generate evidence for attestation based on the claims of the plurality of target environments.

[0662] Example E11 includes the apparatus of any one of Examples E7 to E10, wherein the logic coupled to the one or more substrates is to generate, with a RoT hardware of the graphics processor, a key seed for a second target environment of the plurality of target environments.

[0663] Example E12 includes the apparatus of Example E11, wherein the logic coupled to the one or more substrates is to collect claims, with the RoT hardware, from the second target environment, and generate, with the RoT hardware, evidence based on the claims collected from the second target environment.

[0664] Example E13 includes the apparatus of any one of Examples E7 to E11, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.

[0665] Example E14 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to transmit, with a first target environment of a plurality of target environments of a graphics processor, first key seeds to compute engines of the graphics processor, collect claims, with the first target environment, from the compute engines to generate evidence, and generate, with the compute engines, unique identity keys for each of the compute engines based on the first key seeds.

[0666] Example E15 includes the at least one computer readable storage medium of Example E14, wherein the instructions, when executed, cause the computing device to transmit, with the plurality of target environments, second key seeds to each other.

[0667] Example E16 includes the at least one computer readable storage medium of Example E15, wherein the instructions, when executed, cause the computing device to generate, with the plurality of target environments, unique identity keys based on the second key seeds.

[0668] Example E17 includes the at least one computer readable storage medium of Example E16, wherein the instructions, when executed, cause the computing device to collect, with the plurality of target environments, claims of the plurality of target environments, and generate evidence for attestation based on the claims of the plurality of target environments.

[0669] Example E18 includes the at least one computer readable storage medium of any one of Examples E14 to E17, wherein the instructions, when executed, cause the computing device to generate, with a RoT hardware of the graphics processor, a key seed for a second target environment of the plurality of target environments.

[0670] Example E19 includes the at least one computer readable storage medium of Example E18, wherein the instructions, when executed, cause the computing device to collect claims, with the RoT hardware, from the second target environment, and generate, with the RoT hardware, evidence based on the claims collected from the second target environment.

[0671] Example E20 includes a method comprising transmitting, with a first target environment of a plurality of target environments of a graphics processor, first key seeds to compute engines of the graphics processor, collecting claims, with the first target environment, from the compute engines to generate evidence, and generating, with the compute engines, unique identity keys for each of the compute engines based on the first key seeds.

[0672] Example E21 includes the method of Example E20, further comprising transmitting, with the plurality of target environments, second key seeds to each other.

[0673] Example E22 includes the method of Example E21, further comprising generating, with the plurality of target environments, unique identity keys based on the second key seeds.

[0674] Example E23 includes the method of Example E22, further comprising collecting, with the plurality of target environments, claims of the plurality of target environments, and generating evidence for attestation based on the claims of the plurality of target environments.

[0675] Example E24 includes the method of any one of Examples E20 to E23, further comprising generating, with a RoT hardware of the graphics processor, a key seed for a second target environment of the plurality of target environments.

[0676] Example E25 includes the method of Example E24, further comprising collecting claims, with the RoT hardware, from the second target environment, and generating, with the RoT hardware, evidence based on the claims collected from the second target environment.

[0677] Example E26 includes an apparatus comprising means for performing the method of any one of Examples E20 to E25.

[0678] Example F1 includes a computing system comprising a host processor to execute a host operating system, a graphics processor, and a memory including a set of instructions, which when executed by one or more of the graphics processor or the host processor, cause the computing system to generate, with a virtual machine, confidential data to be rendered, encrypt, with one or more of the graphics processor or the virtual machine, the confidential data according to a first encryption key to generate encrypted confidential data, store the encrypted confidential data in a first buffer, and decrypt, with the graphics processor, the encrypted confidential data to generate decrypted confidential information.

[0679] Example F2 includes the computing system of Example F1, wherein the instructions, when executed, further cause the one or more of the graphics processor or the host processor to conduct a verification process with a trusted execution environment to prove an identity of the virtual machine, receive, with the virtual machine, a session key from the trusted execution environment, wherein the session key is to be the first encryption key, and receive, with the graphics processor, the session key from the trusted execution environment.
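
A minimal sketch of Examples F1 and F2, assuming AES-GCM and the third-party Python "cryptography" package: the trusted execution environment releases the same session key (the first encryption key) to the verified virtual machine and to the graphics processor, the virtual machine encrypts the confidential data into the first buffer, and the graphics processor decrypts it. All names are illustrative.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def tee_issue_session_key(vm_identity_proven: bool) -> bytes:
        # Only a virtual machine that completed the verification process
        # receives the session key.
        if not vm_identity_proven:
            raise PermissionError("virtual machine identity not proven")
        return AESGCM.generate_key(bit_length=256)

    session_key = tee_issue_session_key(vm_identity_proven=True)

    # Virtual machine side: encrypt the confidential data into the first buffer.
    nonce = os.urandom(12)
    confidential_data = b"frame contents to be rendered"
    first_buffer = nonce + AESGCM(session_key).encrypt(nonce, confidential_data, None)

    # Graphics processor side: decrypt with the same session key.
    decrypted = AESGCM(session_key).decrypt(first_buffer[:12], first_buffer[12:], None)
    assert decrypted == confidential_data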

[0680] Example F3 includes the computing system of Example F1, wherein the instructions, when executed, further cause the graphics processor to generate the first encryption key.

[0681] Example F4 includes the computing system of Example F1, wherein the instructions, when executed, further cause one or more of the graphics processor or the host processor to composite the decrypted confidential data with application data to generate composited confidential and application data, wherein the application data is to be associated with one or more applications to be executed on the host operating system, encrypt the composited confidential and application data according to a second encryption key to generate encrypted composited confidential and application data, wherein the second encryption key is to be different from the first encryption key, and store the encrypted composited confidential and application data in a second buffer that is to be different than the first buffer.
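
Examples F4 and F5 can be sketched the same way: the decrypted confidential data is composited with ordinary application data, re-encrypted under a second, different key, parked in a second buffer, and only decrypted again when it is to be displayed. The compositing step below is a placeholder for the real surface blending, and the names are assumptions.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def composite(confidential: bytes, application: bytes) -> bytes:
        # Placeholder for compositing confidential and application surfaces.
        return confidential + application

    second_key = AESGCM.generate_key(bit_length=256)    # distinct from the first key
    composited = composite(b"decrypted confidential", b"application overlay")
    nonce = os.urandom(12)
    second_buffer = nonce + AESGCM(second_key).encrypt(nonce, composited, None)

    # Decrypted under the second key only when the data is to be displayed.
    for_display = AESGCM(second_key).decrypt(second_buffer[:12], second_buffer[12:], None)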

[0682] Example F5 includes the computing system of Example F4, wherein the instructions, when executed, further cause one or more of the graphics processor or the host processor to, in response to an identification that the encrypted composited confidential and application data is to be displayed, decrypt the encrypted composited confidential and application data according to the second encryption key.

[0683] Example F6 includes the computing system of any one of Examples F1 to F5, wherein the first encryption key is to be a private symmetric digital rights management (DRM) session key.

[0684] Example F7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to generate, with a virtual machine, confidential data to be rendered, encrypt, with one or more of a graphics processor or the virtual machine, the confidential data according to a first encryption key to generate encrypted confidential data, store the encrypted confidential data in a first buffer, and decrypt, with the graphics processor, the encrypted confidential data to generate decrypted confidential information.

[0685] Example F8 includes the apparatus of Example F7, wherein the logic coupled to the one or more substrates is to conduct a verification process with a trusted execution environment to prove an identity of the virtual machine, receive, with the virtual machine, a session key from the trusted execution environment, wherein the session key is to be the first encryption key, and receive, with the graphics processor, the session key from the trusted execution environment.

[0686] Example F9 includes the apparatus of Example F7, wherein the logic coupled to the one or more substrates is to generate the first encryption key.

[0687] Example F10 includes the apparatus of Example F7, wherein the logic coupled to the one or more substrates is to composite the decrypted confidential data with application data to generate composited confidential and application data, wherein the application data is to be associated with one or more applications to be executed on a host operating system, encrypt the composited confidential and application data according to a second encryption key to generate encrypted composited confidential and application data, wherein the second encryption key is to be different from the first encryption key, and store the encrypted composited confidential and application data in a second buffer that is to be different than the first buffer.

[0688] Example F11 includes the apparatus of Example F10, wherein the logic coupled to the one or more substrates is to, in response to an identification that the encrypted composited confidential and application data is to be displayed, decrypt the encrypted composited confidential and application data according to the second encryption key.

[0689] Example F12 includes the apparatus of any one of Examples F7 to F11, wherein the first encryption key is to be a private symmetric digital rights management (DRM) session key.

[0690] Example F13 includes the apparatus of any one of Examples F7 to F11, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.

[0691] Example F14 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to generate, with a virtual machine, confidential data to be rendered, encrypt, with one or more of a graphics processor or the virtual machine, the confidential data according to a first encryption key to generate encrypted confidential data, store the encrypted confidential data in a first buffer, and decrypt, with the graphics processor, the encrypted confidential data to generate decrypted confidential information.

[0692] Example F15 includes the at least one computer readable storage medium of Example F14, wherein the instructions, when executed, cause the computing device to conduct a verification process with a trusted execution environment to prove an identity of the virtual machine, receive, with the virtual machine, a session key from the trusted execution environment, wherein the session key is to be the first encryption key, and receive, with the graphics processor, the session key from the trusted execution environment.

[0693] Example F16 includes the at least one computer readable storage medium of Example F14, wherein the instructions, when executed, cause the computing device to generate the first encryption key.

[0694] Example F17 includes the at least one computer readable storage medium of Example F14, wherein the instructions, when executed, cause the computing device to composite the decrypted confidential data with application data to generate composited confidential and application data, wherein the application data is to be associated with one or more applications to be executed on a host operating system, encrypt the composited confidential and application data according to a second encryption key to generate encrypted composited confidential and application data, wherein the second encryption key is to be different from the first encryption key, and store the encrypted composited confidential and application data in a second buffer that is to be different than the first buffer.

[0695] Example F18 includes the at least one computer readable storage medium of Example F17, wherein the instructions, when executed, cause the computing device to, in response to an identification that the encrypted composited confidential and application data is to be displayed, decrypt the encrypted composited confidential and application data according to the second encryption key.

[0696] Example F19 includes the at least one computer readable storage medium of any one of Examples F14 to F18, wherein the first encryption key is to be a private symmetric digital rights management (DRM) session key.

[0697] Example F20 includes a method comprising generating, with a virtual machine, confidential data that will be rendered, encrypting, with one or more of a graphics processor or the virtual machine, the confidential data according to a first encryption key to generate encrypted confidential data, storing the encrypted confidential data in a first buffer, and decrypting, with the graphics processor, the encrypted confidential data to generate decrypted confidential information.

[0698] Example F21 includes the method of Example F20, further comprising conducting a verification process with a trusted execution environment to prove an identity of the virtual machine, receiving, with the virtual machine, a session key from the trusted execution environment, wherein the session key is to be the first encryption key, and receiving, with the graphics processor, the session key from the trusted execution environment.

[0699] Example F22 includes the method of Example F20, further comprising generating the first encryption key.

[0700] Example F23 includes the method of Example F20, further comprising compositing the decrypted confidential data with application data to generate composited confidential and application data, wherein the application data is associated with one or more applications to be executed on a host operating system, encrypting the composited confidential and application data according to a second encryption key to generate encrypted composited confidential and application data, wherein the second encryption key is different from the first encryption key, and storing the encrypted composited confidential and application data in a second buffer that is different than the first buffer.

[0701] Example F24 includes the method of Example F23, further comprising, in response to an identification that the encrypted composited confidential and application data will be displayed, decrypting the encrypted composited confidential and application data according to the second encryption key.

[0702] Example F25 includes the method of any one of Examples F20 to F24, wherein the first encryption key is to be a private symmetric digital rights management (DRM) session key.

[0703] Example F26 includes an apparatus comprising means for performing the method of any one of Examples F20 to F25.

[0704] Example G1 includes a computing system comprising a non-volatile storage, a host processor, a graphics processor, and a memory including a set of instructions, which when executed by one or more of the graphics processor or the host processor, cause the computing system to identify that first data is to be in a first format, wherein the first format is to be a physical address based encryption format, convert, with the graphics processor, the first data from the first format to a second format, wherein the second format is to be a physical address agnostic encryption format, and page-out the first data, that is to be in the second format, from the memory to the non-volatile storage.
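
A minimal sketch of the Example G1 conversion, assuming AES-GCM and a nonce derived from the physical address for the first format: data in the physical-address-bound format is decrypted and re-encrypted under a paging key whose nonce comes from a counter rather than an address, so the ciphertext stays valid wherever it lands after page-out. The key names and nonce construction are assumptions, not the claimed scheme.

    import hashlib
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    memory_key = AESGCM.generate_key(bit_length=256)    # physical-address-bound format
    paging_key = AESGCM.generate_key(bit_length=256)    # address-agnostic format

    def pa_nonce(physical_address: int) -> bytes:
        # First format: the nonce depends on the physical address.
        return hashlib.sha256(physical_address.to_bytes(8, "little")).digest()[:12]

    def convert_for_pageout(ciphertext: bytes, physical_address: int, counter: int) -> bytes:
        # Decrypt out of the address-bound format...
        plaintext = AESGCM(memory_key).decrypt(pa_nonce(physical_address), ciphertext, None)
        # ...and re-encrypt in a format tied only to the paging key and a counter.
        return AESGCM(paging_key).encrypt(counter.to_bytes(12, "little"), plaintext, None)

    physical_address, counter = 0x1000, 1
    in_memory = AESGCM(memory_key).encrypt(pa_nonce(physical_address), b"surface data", None)
    pageable = convert_for_pageout(in_memory, physical_address, counter)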

[0705] Example G2 includes the computing system of Example G1, wherein the instructions, when executed, cause the computing system to increment a global counter in response to an identification that the first data is to be paged-out.

[0706] Example G3 includes the computing system of any one of Examples G1 to G2, wherein the instructions, when executed, cause the computing system to generate a message authentication code (MAC) value based on the first data that is to be in the second format.

[0707] Example G4 includes the computing system of Example G3, wherein the instructions, when executed, cause the computing system to store the MAC value in a protected memory.
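
Examples G2 through G4 add integrity bookkeeping to the page-out path. A rough sketch, with a dictionary standing in for protected memory: each page-out bumps a global counter and records an HMAC over the address-agnostic ciphertext.

    import hashlib
    import hmac
    import os

    mac_key = os.urandom(32)
    protected_macs: dict[int, bytes] = {}    # stand-in for protected memory
    global_counter = 0

    def page_out(page_id: int, pageable_ciphertext: bytes) -> None:
        global global_counter
        global_counter += 1                  # incremented on every page-out
        mac = hmac.new(mac_key, pageable_ciphertext, hashlib.sha256).digest()
        protected_macs[page_id] = mac        # MAC kept where the GPU can trust it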

[0708] Example G5 includes the computing system of Example G1, wherein the instructions, when executed, cause the computing system to page-in second data from a storage, calculate a message authentication code (MAC) value based on the second data, and compare the MAC value of the second data to a MAC value of the first data to determine whether the second data is to correspond to the first data.

[0709] Example G6 includes the computing system of Example G5, wherein the instructions, when executed, cause the computing system to execute one or more operations based on the second data when the MAC value of the second data is the same as the MAC value of the first data, and bypass one or more operations based on the second data when the MAC value of the second data is dissimilar from the MAC value of the first data.
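
The page-in check of Examples G5 and G6 is then the mirror image: recompute the MAC over the data returned from storage, compare it with the value saved at page-out, and either execute or bypass the pending operations. The helper below is a sketch with assumed names.

    import hashlib
    import hmac

    def page_in_and_check(page_id: int, data_from_storage: bytes,
                          mac_key: bytes, protected_macs: dict) -> bool:
        recomputed = hmac.new(mac_key, data_from_storage, hashlib.sha256).digest()
        if hmac.compare_digest(recomputed, protected_macs.get(page_id, b"")):
            return True     # MACs match: execute the operations based on the data
        return False        # MACs differ: bypass operations on stale or tampered data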

[0710] Example G7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to identify that first data is to be in a first format, wherein the first format is to be a physical address based encryption format, convert, with a graphics processor, the first data from the first format to a second format, wherein the second format is to be a physical address agnostic encryption format, and page-out the first data, that is to be in the second format, from a memory to a non-volatile storage.

[0711] Example G8 includes the apparatus of Example G7, wherein the logic coupled to the one or more substrates is to increment a global counter in response to an identification that the first data is to be paged-out.

[0712] Example G9 includes the apparatus of any one of Examples G7 to G8, wherein the logic coupled to the one or more substrates is to generate a message authentication code (MAC) value based on the first data that is to be in the second format.

[0713] Example G10 includes the apparatus of Example G9, wherein the logic coupled to the one or more substrates is to store the MAC value in a protected memory.

[0714] Example G11 includes the apparatus of Example G7, wherein the logic coupled to the one or more substrates is to page-in second data from a storage, calculate a message authentication code (MAC) value based on the second data, and compare the MAC value of the second data to a MAC value of the first data to determine whether the second data is to correspond to the first data.

[0715] Example G12 includes the apparatus of Example G11, wherein the logic coupled to the one or more substrates is to execute one or more operations based on the second data when the MAC value of the second data is the same as the MAC value of the first data, and bypass one or more operations based on the second data when the MAC value of the second data is dissimilar from the MAC value of the first data.

[0716] Example G13 includes the apparatus of any one of Examples G7 to G11, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.

[0717] Example G14 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to identify that first data is to be in a first format, wherein the first format is to be a physical address based encryption format, convert, with a graphics processor, the first data from the first format to a second format, wherein the second format is to be a physical address agnostic encryption format, and page-out the first data, that is to be in the second format, from a memory to a non-volatile storage.

[0718] Example G15 includes the at least one computer readable storage medium of Example G14, wherein the instructions, when executed, cause the computing device to increment a global counter in response to an identification that the first data is to be paged-out.

[0719] Example G16 includes the at least one computer readable storage medium of any one of Examples G14 to G15, wherein the instructions, when executed, cause the computing device to generate a message authentication code (MAC) value based on the first data that is to be in the second format.

[0720] Example G17 includes the at least one computer readable storage medium of Example G16, wherein the instructions, when executed, cause the computing device to store the MAC value in a protected memory.

[0721] Example G18 includes the at least one computer readable storage medium of Example G14, wherein the instructions, when executed, cause the computing device to page-in second data from a storage, calculate a message authentication code (MAC) value based on the second data, and compare the MAC value of the second data to a MAC value of the first data to determine whether the second data is to correspond to the first data.

[0722] Example G19 includes the at least one computer readable storage medium of Example G18, wherein the instructions, when executed, cause the computing device to execute one or more operations based on the second data when the MAC value of the second data is the same as the MAC value of the first data, and bypass one or more operations based on the second data when the MAC value of the second data is dissimilar from the MAC value of the first data.

[0723] Example G20 includes a method comprising identifying that first data is to be in a first format, wherein the first format is a physical address based encryption format, converting, with a graphics processor, the first data from the first format to a second format, wherein the second format is a physical address agnostic encryption format, and paging-out the first data, that is in the second format, from a memory to a non-volatile storage.

[0724] Example G21 includes the method of Example G20, further comprising incrementing a global counter in response to an identification that the first data is to be paged-out.

[0725] Example G22 includes the method of any one of Examples G20 to G21, further comprising generating a message authentication code (MAC) value based on the first data that is to be in the second format.

[0726] Example G23 includes the method of Example G22, further comprising storing the MAC value in a protected memory.

[0727] Example G24 includes the method of Example G20, further comprising paging-in second data from a storage, calculating a message authentication code (MAC) value based on the second data, and comparing the MAC value of the second data to a MAC value of the first data to determine whether the second data is to correspond to the first data.

[0728] Example G25 includes the method of Example G24, further comprising executing one or more operations based on the second data when the MAC value of the second data is the same as the MAC value of the first data, and bypassing one or more operations based on the second data when the MAC value of the second data is dissimilar from the MAC value of the first data.

[0729] Example G26 includes an apparatus comprising means for performing the method of any one of Examples G20 to G25.

[0730] Embodiments are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

[0731] Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

[0732] The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms "first", "second", etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

[0733] As used in this application and in the claims, a list of items joined by the term "one or more of" may mean any combination of the listed terms. For example, the phrase "one or more of A, B, and C" and the phrase "one or more of A, B, or C" both may mean A; B; C; A and B; A and C; B and C; or A, B and C.

[0734] Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.

* * * * *

