Anonymous wrote:It all kind of raises the question of whether one of the parents works on optical computing as their day job at one of the several local Federal/Federally-funded labs with active work in that area…
Anonymous wrote:Anonymous wrote:https://arxiv.org/abs/2208.06749
Tensor algebra lies at the core of computational science and machine learning. Due to its high usage, entire libraries exist dedicated to improving its performance. Conventional tensor algebra performance boosts focus on algorithmic optimizations, which in turn lead to incremental improvements. In this paper, we describe a method to accelerate tensor algebra a different way: by outsourcing operations to an optical microchip. We outline a numerical programming language developed to perform tensor algebra computations that is designed to leverage our optical hardware's full potential. We introduce the language's current grammar and go over the compiler design. We then show a new way to store sparse rank-n tensors in RAM that outperforms conventional array storage (used by C++, Java, etc.). This method is more memory-efficient than Compressed Sparse Fiber (CSF) format and is specifically tuned for our optical hardware. Finally, we show how the scalar-tensor product, rank-n Kronecker product, tensor dot product, Khatri-Rao product, face-splitting product, and vector cross product can be compiled into operations native to our optical microchip through various tensor decompositions.
Impressive that FCPS kids are showing "a new way to store sparse rank-n tensors in RAM that outperforms conventional array storage".
https://arxiv.org/pdf/2208.06749.pdf
I'm trying to understand the discussion in this paper, but it's beyond my grasp. Is tensor algebra taught in one of the TJ math courses?
It's an algorithm that would turn fundamental deep learning operations currently run on GPUs into operations that could run on a hypothetical optical microchip; a rough sketch of the idea is below.
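To make that concrete, here is a minimal Python sketch of the compile-to-matmul idea, with no claim to match the actual paper: optical_matmul is a hypothetical stand-in for whatever native matrix product the photonic hardware exposes, and the point is only that a higher-rank tensor contraction can be flattened into a single such product.

import numpy as np

def optical_matmul(A, B):
    # Hypothetical stand-in for the photonic chip's native matrix multiply;
    # on real hardware this call would be dispatched to the optical unit.
    return A @ B

def tensor_dot(T, M):
    # Contract the last axis of a rank-3 tensor T with a matrix M by
    # flattening T into a matrix first, so the whole contraction becomes
    # one matmul that matmul-native hardware could execute directly.
    i, j, k = T.shape
    flat = T.reshape(i * j, k)          # rank-3 -> rank-2
    out = optical_matmul(flat, M)       # one native matrix product
    return out.reshape(i, j, M.shape[1])

T = np.random.rand(2, 3, 4)
M = np.random.rand(4, 5)
assert np.allclose(tensor_dot(T, M), np.tensordot(T, M, axes=1))

Anonymous wrote:Anonymous wrote:https://arxiv.org/abs/2208.06749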
Tensor algebra lies at the core of computational science and machine learning. Due to its high usage, entire libraries exist dedicated to improving its performance. Conventional tensor algebra performance boosts focus on algorithmic optimizations, which in turn lead to incremental improvements. In this paper, we describe a method to accelerate tensor algebra a different way: by outsourcing operations to an optical microchip. We outline a numerical programming language developed to perform tensor algebra computations that is designed to leverage our optical hardware's full potential. We introduce the language's current grammar and go over the compiler design. We then show a new way to store sparse rank-n tensors in RAM that outperforms conventional array storage (used by C++, Java, etc.). This method is more memory-efficient than Compressed Sparse Fiber (CSF) format and is specifically tuned for our optical hardware. Finally, we show how the scalar-tensor product, rank-n Kronecker product, tensor dot product, Khatri-Rao product, face-splitting product, and vector cross product can be compiled into operations native to our optical microchip through various tensor decompositions.
Impressive that FCPS kids are showing "a new way to store sparse rank-n tensors in RAM that outperforms conventional array storage".
That is cool, but how is this related to the light-powered super chip?
Anonymous wrote:Anonymous wrote:Anonymous wrote:Anonymous wrote:It all kind of raises the question of whether one of the parents works on optical computing as their day job at one of the several local Federal/Federally-funded labs with active work in that area…
A lot of TJ students have a parent who feeds them ideas from their workplace, or who outright does the work for them.
What's next? Have you heard of parents going and sitting in TJ classrooms to do the classwork too?
No, but they do their homework for them.
Anonymous wrote:Anonymous wrote:Anonymous wrote:It all kind of raises the question of whether one of the parents works on optical computing as their day job at one of the several local Federal/Federally-funded labs with active work in that area…
A lot of TJ students have a parent who feeds them ideas from their workplace, or who outright does the work for them.
What's next? Have you heard of parents going and sitting in TJ classrooms to do the classwork too?
Anonymous wrote:Anonymous wrote:It all kind of raises the question of whether one of the parents works on optical computing as their day job at one of the several local Federal/Federally-funded labs with active work in that area…
You have no shame, do you? Imbeciles like you seem to derive sadistic pleasure from casting doubt on the exceptional efforts of students, insinuating that they must have been carried out by adults instead.
Anonymous wrote:It all kind of raises the question of whether one of the parents works on optical computing as their day job at one of the several local Federal/Federally-funded labs with active work in that area…
Anonymous wrote:Anonymous wrote:Instead of showing racist attitudes toward the Indian American community, let's be thankful for their service and contributions to the tech world:
Sanjay Mehrotra is the CEO of Micron Technology; Shantanu Narayen is the CEO of Adobe; Satya Nadella is Chairman and CEO of Microsoft; Sundar Pichai is the CEO of Alphabet and Google; Jay Chaudhry is the CEO of Zscaler, a cloud security company; Arvind Krishna is the CEO of IBM; Neal Mohan is the CEO of YouTube; and George Kurian is the CEO of NetApp, among the top tech giants.
None of these companies was founded by them, though.
Anonymous wrote:Anonymous wrote:https://arxiv.org/abs/2208.06749
Tensor algebra lies at the core of computational science and machine learning. Due to its high usage, entire libraries exist dedicated to improving its performance. Conventional tensor algebra performance boosts focus on algorithmic optimizations, which in turn lead to incremental improvements. In this paper, we describe a method to accelerate tensor algebra a different way: by outsourcing operations to an optical microchip. We outline a numerical programming language developed to perform tensor algebra computations that is designed to leverage our optical hardware's full potential. We introduce the language's current grammar and go over the compiler design. We then show a new way to store sparse rank-n tensors in RAM that outperforms conventional array storage (used by C++, Java, etc.). This method is more memory-efficient than Compressed Sparse Fiber (CSF) format and is specifically tuned for our optical hardware. Finally, we show how the scalar-tensor product, rank-n Kronecker product, tensor dot product, Khatri-Rao product, face-splitting product, and vector cross product can be compiled into operations native to our optical microchip through various tensor decompositions.
Impressive that FCPS kids are showing "a new way to store sparse rank-n tensors in RAM that outperforms conventional array storage".
https://arxiv.org/pdf/2208.06749.pdf
I'm trying to understand the discussion in this paper, but it's beyond my grasp. Is tensor algebra taught in one of the TJ math courses?
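For anyone trying to parse the storage claim: the baseline is a dense n-dimensional array (every entry stored, zeros included), and the stronger comparison point is the Compressed Sparse Fiber (CSF) format. Below is a minimal Python sketch of the simplest sparse alternative, coordinate (COO) storage; this is emphatically not the authors' format, just background on why sparse layouts save memory.

class SparseTensorCOO:
    # Store only the nonzero entries of a rank-n tensor as
    # (index tuple -> value) pairs instead of a dense n-D array.
    def __init__(self, shape):
        self.shape = shape
        self.data = {}                  # maps index tuple -> value

    def __setitem__(self, idx, value):
        if value != 0:
            self.data[idx] = value
        else:
            self.data.pop(idx, None)    # keep only true nonzeros

    def __getitem__(self, idx):
        return self.data.get(idx, 0.0)

    def stored_entries(self):
        # Dense storage needs prod(shape) slots; COO needs one
        # (indices + value) record per nonzero.
        return len(self.data)

# A 100x100x100 tensor with two nonzeros: dense storage would hold
# 1,000,000 values, COO holds 2 records.
t = SparseTensorCOO((100, 100, 100))
t[0, 1, 2] = 3.5
t[99, 99, 99] = -1.0
print(t[0, 1, 2], t[5, 5, 5], t.stored_entries())   # 3.5 0.0 2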
Anonymous wrote:Anonymous wrote:It all kind of raises the question of whether one of the parents works on optical computing as their day job at one of the several local Federal/Federally-funded labs with active work in that area…
A lot of TJ students have a parent who feeds them ideas from their workplace, or who outright does the work for them.
Anonymous wrote:It all kind of raises the question of whether one of the parents works on optical computing as their day job at one of the several local Federal/Federally-funded labs with active work in that area…
Anonymous wrote:Anonymous wrote:*big eye roll* I don't deny that these kids are smart and entrepreneurial, but the marketing hype is just cringy.
Typical South East Indians.
Part of the culture.
Anonymous wrote:It all kind of raises the question of whether one of the parents works on optical computing as their day job at one of the several local Federal/Federally-funded labs with active work in that area…