Nair; Krishnakumar Narayanan Patent Filings

Nair; Krishnakumar Narayanan

Patent Applications and Grants

Patent applications and USPTO patent grants for Nair; Krishnakumar Narayanan. The latest application filed is for "Floating Point Multiply Hardware Using Decomposed Component Numbers".
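
As a reader's aid for that title only (none of this is drawn from the application itself): "decomposed component numbers" generally suggests splitting each floating-point operand into lower-precision components and reassembling the product from partial products. The Python sketch below illustrates that generic idea under that assumption; the names (f32, split_components, multiply_decomposed) are hypothetical, and the code makes no claim about the hardware or method actually described in application 20220107782.

    # Purely illustrative sketch, NOT the method claimed in the application:
    # emulate a float32 multiply by decomposing each operand into components.
    import struct

    def f32(x: float) -> float:
        """Round a Python float to float32 precision."""
        return struct.unpack("f", struct.pack("f", x))[0]

    def split_components(x: float):
        """Split a float32 value into a coarse 'high' component (top mantissa
        bits only, bfloat16-style truncation) plus the exact 'low' remainder,
        so that high + low reproduces x."""
        bits = struct.unpack("I", struct.pack("f", x))[0]
        high = struct.unpack("f", struct.pack("I", bits & 0xFFFF0000))[0]
        low = f32(x - high)  # the remainder is exactly representable in float32
        return high, low

    def multiply_decomposed(a: float, b: float) -> float:
        """Approximate a * b as the sum of the four component partial products
        (formed here in Python doubles purely for illustration)."""
        a_hi, a_lo = split_components(f32(a))
        b_hi, b_lo = split_components(f32(b))
        return f32(a_hi * b_hi + a_hi * b_lo + a_lo * b_hi + a_lo * b_lo)

    if __name__ == "__main__":
        a, b = 3.14159, -2.71828
        print(multiply_decomposed(a, b))  # close to the direct float32 product
        print(f32(f32(a) * f32(b)))       # direct float32 product for comparison

Because the low part is the exact remainder after truncating the mantissa, high + low equals the original float32 value, so the four partial products recover the full-precision product to within a small rounding error.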

Company Profile
  • Nair; Krishnakumar Narayanan - Newark, CA
*Profile and listings may contain filings by different individuals or companies with the same name. Review application materials to confirm ownership/assignment.
Patent Activity
  • Using a low-bit-width dot product engine to sum high-bit-width numbers - Grant 11,455,143 - Ulrich, et al. - 2022-09-27
  • Pipelined pointwise convolution using per-channel convolution operations - Grant 11,443,013 - Komuravelli, et al. - 2022-09-13
  • High throughput matrix processor with support for concurrently processing multiple matrices - Grant 11,409,838 - Nair, et al. - 2022-08-09
  • Device and method for flexibly summing matrix values - Grant 11,379,557 - Nair, et al. - 2022-07-05
  • Matrix processing instruction with optional up/down sampling of matrix - Grant 11,372,644 - Ulrich, et al. - 2022-06-28
  • Floating Point Multiply Hardware Using Decomposed Component Numbers - App 20220107782 - Nair; Krishnakumar Narayanan, et al. - 2022-04-07
  • Hardware for floating-point arithmetic in multiple formats - Grant 11,275,560 - Ulrich, et al. - 2022-03-15
  • Floating point multiply hardware using decomposed component numbers - Grant 11,188,303 - Nair, et al. - 2021-11-30
  • Device And Method For Flexibly Summing Matrix Values - App 20210349965 - Nair; Krishnakumar Narayanan, et al. - 2021-11-11
  • Using A Low-bit-width Dot Product Engine To Sum High-bit-width Numbers - App 20210349690 - Ulrich; Thomas Mark, et al. - 2021-11-11
  • Mapping Convolution To Connected Processing Elements Using Distributed Pipelined Separable Convolution Operations - App 20210334072 - Komuravelli; Rakesh, et al. - 2021-10-28
  • High Bandwidth Memory System With Distributed Request Broadcasting Masters - App 20210326051 - Diril; Abdulkadir Utku, et al. - 2021-10-21
  • Grouped Convolution Using Point-to-point Connected Channel Convolution Engines - App 20210319076 - Komuravelli; Rakesh, et al. - 2021-10-14
  • Pipelined Pointwise Convolution Using Per-channel Convolution Operations - App 20210294875 - Komuravelli; Rakesh, et al. - 2021-09-23
  • Systems and methods for reducing power consumption of convolution operations for artificial neural networks - Grant 11,120,328 - Nair - 2021-09-14
  • Mapping Convolution To A Partition Channel Convolution Engine - App 20210271451 - Nair; Krishnakumar Narayanan, et al. - 2021-09-02
  • Mapping Convolution To A Channel Convolution Engine - App 20210256363 - Nair; Krishnakumar Narayanan, et al. - 2021-08-19
  • Hardware For Floating-point Arithmetic In Multiple Formats - App 20210255830 - Ulrich; Thomas Mark, et al. - 2021-08-19
  • High bandwidth memory system with distributed request broadcasting masters - Grant 11,054,998 - Diril, et al. - 2021-07-06
  • Systems And Methods For Reducing Data Movement During Convolution Operations In Artificial Neural Networks - App 20210192359 - Khish Ardestani Zadeh; Ehsan, et al. - 2021-06-24
  • High Bandwidth Memory System With Crossbar Switch For Dynamically Programmable Distribution Scheme - App 20210182196 - Wu; Olivia, et al. - 2021-06-17
  • High Bandwidth Memory System With Distributed Request Broadcasting Masters - App 20210181957 - Diril; Abdulkadir Utku, et al. - 2021-06-17
  • Hardware Accelerated Matrix Manipulation Operations Using Processor Instructions - App 20210173646 - Ulrich; Thomas Mark, et al. - 2021-06-10
  • High Bandwidth Memory System With Dynamically Programmable Distribution Scheme - App 20210165691 - Diril; Abdulkadir Utku, et al. - 2021-06-03
  • High Throughput Matrix Processor With Support For Concurrently Processing Multiple Matrices - App 20210124794 - Nair; Krishnakumar Narayanan, et al. - 2021-04-29
  • Support For Different Matrix Multiplications By Selecting Adder Tree Intermediate Results - App 20210125044 - Hao; Yuchen, et al. - 2021-04-29
  • Floating Point Multiply Hardware Using Decomposed Component Numbers - App 20210103429 - Nair; Krishnakumar Narayanan, et al. - 2021-04-08
  • Memory organization for matrix processing - Grant 10,872,038 - Nair, et al. - 2020-12-22

